
Chapter 3. Prerequisites


Review the following prerequisites to ensure that your environment is prepared for migration.

3.1. Software requirements

Migration Toolkit for Virtualization (MTV) has software requirements for all providers as well as specific software requirements per provider.

3.1.1. Software requirements for all providers

You must install compatible versions of Red Hat OpenShift and OpenShift Virtualization.

3.2. Storage support and default modes

Migration Toolkit for Virtualization (MTV) uses the following default volume and access modes for supported storage.

Table 3.1. Default volume and access modes

    Provisioner                              Volume mode   Access mode
    kubernetes.io/aws-ebs                    Block         ReadWriteOnce
    kubernetes.io/azure-disk                 Block         ReadWriteOnce
    kubernetes.io/azure-file                 Filesystem    ReadWriteMany
    kubernetes.io/cinder                     Block         ReadWriteOnce
    kubernetes.io/gce-pd                     Block         ReadWriteOnce
    kubernetes.io/hostpath-provisioner       Filesystem    ReadWriteOnce
    manila.csi.openstack.org                 Filesystem    ReadWriteMany
    openshift-storage.cephfs.csi.ceph.com    Filesystem    ReadWriteMany
    openshift-storage.rbd.csi.ceph.com       Block         ReadWriteOnce
    kubernetes.io/rbd                        Block         ReadWriteOnce
    kubernetes.io/vsphere-volume             Block         ReadWriteOnce

Note

If the OpenShift Virtualization storage does not support dynamic provisioning, you must apply the following settings:

  • Filesystem volume mode

    Filesystem volume mode is slower than Block volume mode.

  • ReadWriteOnce access mode

    ReadWriteOnce access mode does not support live virtual machine migration.

See Enabling a statically-provisioned storage class for details on editing the storage profile.
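The settings in the note above are applied in the StorageProfile custom resource that CDI maintains for each storage class; a minimal sketch, assuming a statically provisioned storage class named nfs-static (the name is a placeholder):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: nfs-static          # must match the storage class name; placeholder
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce         # ReadWriteOnce does not support live VM migration
    volumeMode: Filesystem  # slower than Block, but required without dynamic provisioning
```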

Note

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that CDI assumes does not completely account for the space reserved for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.
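The overhead can be raised through the CDI custom resource; a sketch, assuming the default CDI CR (named cdi) and an overhead of 12% expressed as a fraction:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  name: cdi
spec:
  config:
    filesystemOverhead:
      global: "0.12"   # 12% overhead, up from the assumed 10% default
```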

Note

When you migrate from OpenStack, or when you run a cold migration from Red Hat Virtualization to the Red Hat OpenShift cluster that MTV is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead.

If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer fails due to lack of space. In that case, increase the file system overhead.

In some cases, however, you might want to decrease the file system overhead to reduce storage consumption.

You can change the file system overhead by changing the value of controller_filesystem_overhead in the spec section of the forklift-controller CR, as described in Configuring the MTV Operator.
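For example, a sketch of the relevant portion of the forklift-controller CR, assuming the value is expressed as a percentage, as the 10% default suggests:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  controller_filesystem_overhead: 12   # assumed to be a percentage; 10 is the default
```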

3.3. Network prerequisites

The following prerequisites apply to all migrations:

  • IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.
  • The network connections between the source environment, the OpenShift Virtualization cluster, and the replication repository must be reliable and uninterrupted.
  • If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.
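A network attachment definition for an additional destination network might look like the following minimal sketch; the network name, bridge name, and namespace are placeholders:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan10-net           # placeholder name
  namespace: openshift-mtv   # must be visible to the target namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan10-net",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {}
    }
```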

3.3.1. Ports

The firewalls must enable traffic over the following ports:

Table 3.2. Network ports required for migrating from VMware vSphere

    Port   Protocol   Source            Destination         Purpose
    443    TCP        OpenShift nodes   VMware vCenter      VMware provider inventory; disk transfer authentication
    443    TCP        OpenShift nodes   VMware ESXi hosts   Disk transfer authentication
    902    TCP        OpenShift nodes   VMware ESXi hosts   Disk transfer data copy

Table 3.3. Network ports required for migrating from Red Hat Virtualization

    Port    Protocol   Source            Destination   Purpose
    443     TCP        OpenShift nodes   RHV Engine    RHV provider inventory; disk transfer authentication
    54322   TCP        OpenShift nodes   RHV hosts     Disk transfer data copy

3.4. Source virtual machine prerequisites

The following prerequisites apply to all migrations:

  • ISO images and CD-ROMs are unmounted.
  • Each NIC has an IPv4 address, an IPv6 address, or both.
  • The operating system of each VM is certified and supported as a guest operating system for conversions.
Note

You can check that the operating system is supported by referring to the table in Converting virtual machines from other hypervisors to KVM with virt-v2v. See the columns of the table that refer to RHEL 8 hosts and RHEL 9 hosts.

  • VMs that you want to migrate with MTV 2.6.z run on RHEL 8.
  • VMs that you want to migrate with MTV 2.7.z run on RHEL 9.
  • The name of a VM must not contain a period (.). Migration Toolkit for Virtualization (MTV) changes any period in a VM name to a dash (-).
  • The name of a VM must not be the same as any other VM in the OpenShift Virtualization environment.

    Warning

    MTV has limited support for the migration of dual-boot operating system VMs.

    In the case of a dual-boot operating system VM, MTV tries to convert the first boot disk it finds. Alternatively, you can specify the root device in the MTV UI.

    Warning

    For virtual machines (VMs) running Microsoft Windows, Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications. 

    When performing a warm migration of a Microsoft Windows virtual machine from VMware, you must start VSS on the Windows guest OS in order for the snapshot and Quiesce guest file system to succeed.

    If you do not start VSS on the Windows guest OS, the snapshot creation during the Warm migration fails with the following error:

    An error occurred while taking a snapshot: Failed to restart the virtual machine

    If you set the VSS service to Manual and start a snapshot creation with Quiesce guest file system = yes, the VMware Snapshot provider service requests VSS, in the background, to start the shadow copy.

    Note

    Migration Toolkit for Virtualization automatically assigns a new name to a VM that does not comply with the rules.

    Migration Toolkit for Virtualization makes the following changes when it automatically generates a new VM name:

    • Excluded characters are removed.
    • Uppercase letters are switched to lowercase letters.
    • Any underscore (_) is changed to a dash (-).

    This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules.

VMs with Secure Boot enabled might not be migrated automatically

Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider. 

Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

Windows VMs which are using Measured Boot cannot be migrated

Microsoft Windows virtual machines (VMs) that use the Measured Boot feature cannot be migrated, because Measured Boot is designed to prevent any device changes by checking each start-up component, from the firmware all the way to the boot driver.

The alternative to migration is to re-create the Windows VM directly on OpenShift Virtualization.

3.5. Red Hat Virtualization prerequisites

The following prerequisites apply to Red Hat Virtualization migrations:

  • To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; any other administrator or superuser permissions also work.
Important

You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration will fail.

  • To migrate virtual machines:

    • You must have one of the following:

      • RHV admin permissions. These permissions allow you to migrate any virtual machine in the system.
      • DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.
    • You must use a compatible version of Red Hat Virtualization.
    • You must have the Manager CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Manager Apache CA certificate.

      You can obtain the Manager CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.

    • If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.
Note
  • Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
  • LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs in the target environment at the same time, because concurrent use might lead to data corruption.

3.6. OpenStack prerequisites

The following prerequisites apply to OpenStack migrations.

In addition to the standard username and password credential set, MTV versions 2.6 and later support the following authentication methods for migrations with OpenStack source providers:

  • Token authentication
  • Application credential authentication

You can use these methods to migrate virtual machines with OpenStack source providers using the command-line interface (CLI) the same way you migrate other virtual machines, except for how you prepare the Secret manifest.

3.6.1.1. Using token authentication with an OpenStack source provider

You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.

MTV supports both of the following types of token authentication:

  • Token with user ID
  • Token with user name

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

Prerequisites

You have an OpenStack account.

Procedure

  1. In the dashboard of the OpenStack web console, click Project > API Access.
  2. Expand Download OpenStack RC file and click OpenStack RC file.

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    OS_AUTH_URL
    OS_PROJECT_ID
    OS_PROJECT_NAME
    OS_DOMAIN_NAME
    OS_USERNAME
  3. To get the data needed for token authentication, run the following command:

    $ openstack token issue

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

  4. Create a Secret manifest similar to the following:

    • For authentication using a token with user ID:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-tokenid
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: token
        token: <token_from_openstack_token_output>
        projectID: <projectID_from_openstack_token_output>
        userID: <userID_from_openstack_token_output>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF
    • For authentication using a token with user name:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-tokenname
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: token
        token: <token_from_openstack_token_output>
        domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
        projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
        username: <OS_USERNAME_from_openstack_rc_file>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF

3.6.1.2. Using application credential authentication with an OpenStack source provider

You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.

MTV supports both of the following types of application credential authentication:

  • Application credential ID
  • Application credential name

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

Prerequisites

You have an OpenStack account.

Procedure

  1. In the dashboard of the OpenStack web console, click Project > API Access.
  2. Expand Download OpenStack RC file and click OpenStack RC file.

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    OS_AUTH_URL
    OS_PROJECT_ID
    OS_PROJECT_NAME
    OS_DOMAIN_NAME
    OS_USERNAME
  3. To get the data needed for application credential authentication, run the following command:

    $ openstack application credential create --role member --role reader --secret redhat forklift

    The output, referred to here as <openstack_credential_output>, includes:

    • The id and secret that you need for authentication using an application credential ID
    • The name and secret that you need for authentication using an application credential name
  4. Create a Secret manifest similar to the following:

    • For authentication using the application credential ID:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-appid
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: applicationcredential
        applicationCredentialID: <id_from_openstack_credential_output>
        applicationCredentialSecret: <secret_from_openstack_credential_output>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF
    • For authentication using the application credential name:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-appname
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: applicationcredential
        applicationCredentialName: <name_from_openstack_credential_output>
        applicationCredentialSecret: <secret_from_openstack_credential_output>
        domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
        username: <OS_USERNAME_from_openstack_rc_file>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF

3.7. VMware prerequisites

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

Warning

Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.

The following prerequisites apply to VMware migrations:

  • You must use a compatible version of VMware vSphere.
  • You must be logged in as a user with at least the minimal set of VMware privileges.
  • To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.
  • The VM operating system must be certified and supported for use as a guest operating system with OpenShift Virtualization and for conversion to KVM with virt-v2v.
  • If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
  • If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.
  • It is strongly recommended to disable hibernation because Migration Toolkit for Virtualization (MTV) does not support migrating hibernated VMs.
Warning

For virtual machines (VMs) running Microsoft Windows, Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications. 

When performing a warm migration of a Microsoft Windows virtual machine from VMware, you must start VSS on the Windows guest OS in order for the snapshot and Quiesce guest file system to succeed.

If you do not start VSS on the Windows guest OS, the snapshot creation during the Warm migration fails with the following error:

An error occurred while taking a snapshot: Failed to restart the virtual machine

If you set the VSS service to Manual and start a snapshot creation with Quiesce guest file system = yes, the VMware Snapshot provider service requests VSS, in the background, to start the shadow copy.

Important

In case of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, migration will fail.

Note

Neither MTV nor OpenShift Virtualization supports conversion of Btrfs file systems when migrating VMs from VMware.
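The changed block tracking (CBT) requirement for warm migrations, noted in the prerequisites above, can be met by powering off the VM and adding advanced configuration parameters for the VM and for each of its disks; a sketch of the VMX-style keys, where scsi0:0 stands for one example disk:

```
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"
```

Power the VM back on after adding the parameters; CBT takes effect from the next snapshot.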

3.7.1. VMware privileges

The following minimal set of VMware privileges is required to migrate virtual machines to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).

Table 3.4. VMware privileges
PrivilegeDescription

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

Virtual machine.Guest operating system management by VIX API

Allows managing a virtual machine by the VMware VIX API.

Virtual machine.Provisioning privileges:

Note

All Virtual machine.Provisioning privileges are required.

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

Datastore privileges:

Datastore.Browse datastore

Allows exploring the contents of a datastore.

Datastore.Low level file operations

Allows performing low-level file operations - read, write, delete, and rename - in a datastore.

Sessions privileges:

Sessions.Validate session

Allows verification of the validity of a session.

Cryptographic privileges:

Cryptographic.Decrypt

Allows decryption of an encrypted virtual machine.

Cryptographic.Direct access

Allows access to encrypted resources.

Tip

Create a role in VMware with the privileges described in the preceding table and then apply this role to the Inventory section, as described in Creating a VMware role to grant MTV privileges.

3.7.2. Creating a VMware role to grant MTV privileges

You can create a role in VMware to grant privileges for Migration Toolkit for Virtualization (MTV) and then grant those privileges to users with that role.

The procedure that follows explains how to do this in general. For detailed instructions, see VMware documentation.

Procedure

  1. In the vCenter Server UI, create a role that includes the set of privileges described in the table in VMware prerequisites.
  2. In the vSphere inventory UI, grant privileges for users with this role to the appropriate vSphere logical objects at one of the following levels:

    1. At the user or group level: Assign privileges to the appropriate logical objects in the data center and use the Propagate to child objects option.
    2. At the object level: Apply the same role individually to all the relevant vSphere logical objects involved in the migration, for example, hosts, vSphere clusters, data centers, or networks.

3.7.3. Creating a VDDK image

It is strongly recommended that you use Migration Toolkit for Virtualization (MTV) with the VMware Virtual Disk Development Kit (VDDK) when transferring virtual disks from VMware vSphere.

Note

Creating a VDDK image, although optional, is strongly recommended. Using MTV without VDDK can result in significantly slower migrations.

To make use of this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.

The VDDK package contains symbolic links; therefore, the procedure of creating a VDDK image must be performed on a file system that preserves symbolic links (symlinks).

Note

Storing the VDDK image in a public registry might violate the VMware license terms.

Prerequisites

  • Red Hat OpenShift image registry.
  • podman installed.
  • You are working on a file system that preserves symbolic links (symlinks).
  • If you are using an external registry, OpenShift Virtualization must be able to access it.

Procedure

  1. Create and navigate to a temporary directory:

    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
  2. In a browser, navigate to the VMware VDDK version 8 download page.
  3. Select version 8.0.1 and click Download.
Note

To migrate to OpenShift Virtualization 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

  4. Save the VDDK archive file in the temporary directory.
  5. Extract the VDDK archive:

    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
  6. Create a Dockerfile:

    $ cat > Dockerfile <<EOF
    FROM registry.access.redhat.com/ubi8/ubi-minimal
    USER 1001
    COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    RUN mkdir -p /opt
    ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    EOF
  7. Build the VDDK image:

    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
  8. Push the VDDK image to the registry:

    $ podman push <registry_route_or_server_path>/vddk:<tag>
  9. Ensure that the image is accessible to your OpenShift Virtualization environment.

3.7.4. Increasing the NFC service memory of an ESXi host

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

Procedure

  1. Log in to the ESXi host as root.
  2. Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    ...
          <nfcsvc>
             <path>libnfcsvc.so</path>
             <enabled>true</enabled>
             <maxMemory>1000000000</maxMemory>
             <maxStreamMemory>10485760</maxStreamMemory>
          </nfcsvc>
    ...
  3. Restart hostd:

    # /etc/init.d/hostd restart

    You do not need to reboot the host.

3.7.5. VDDK validator containers need requests and limits

If you have cluster or project resource quotas set, you must ensure that you have a sufficient quota for the MTV pods to perform the migration.

The defaults, which you can override in the ForkliftController custom resource (CR), are listed as follows. If necessary, you can adjust these defaults.

These settings are highly dependent on your environment. If many migrations are happening at once and the quotas are too low for them, the migrations can fail. The required quota also correlates with the MAX_VM_INFLIGHT setting, which determines how many VMs or disks are migrated at once.

Defaults that can be overridden in the ForkliftController CR:

  • This affects both cold and warm migrations:

    Cold migration is likely to be more resource-intensive because it performs the disk copy. For warm migration, you could potentially reduce the requests.

    • virt_v2v_container_limits_cpu: 4000m
    • virt_v2v_container_limits_memory: 8Gi
    • virt_v2v_container_requests_cpu: 1000m
    • virt_v2v_container_requests_memory: 1Gi

      Note

      Cold and warm migration using virt-v2v can be resource-intensive. For more details, see Compute power and RAM.

  • This affects any migrations with hooks:

    • hooks_container_limits_cpu: 1000m
    • hooks_container_limits_memory: 1Gi
    • hooks_container_requests_cpu: 100m
    • hooks_container_requests_memory: 150Mi
  • This affects any OVA migrations:

    • ova_container_limits_cpu: 1000m
    • ova_container_limits_memory: 1Gi
    • ova_container_requests_cpu: 100m
    • ova_container_requests_memory: 150Mi
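For example, the defaults above can be overridden in the spec of the ForkliftController CR; a sketch with lowered virt-v2v requests, such as might suit warm migrations (the values are illustrative):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  virt_v2v_container_limits_cpu: "4000m"
  virt_v2v_container_limits_memory: "8Gi"
  virt_v2v_container_requests_cpu: "500m"    # illustrative: reduced from the 1000m default
  virt_v2v_container_requests_memory: "1Gi"
```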

3.8. Open Virtual Appliance (OVA) prerequisites

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

  • All OVA files are created by VMware vSphere.
Note

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by MTV. MTV supports only OVA files created by VMware vSphere.

  • The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    • In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      When this structure is used, MTV scans the root folder and the first-level subfolders for compressed packages.

      For example, if the NFS share is /nfs:
      The folder /nfs is scanned.
      The folder /nfs/subfolder1 is scanned.
      But, /nfs/subfolder1/subfolder2 is not scanned.

    • In extracted OVF packages.

      When this structure is used, MTV scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      For example, if the NFS share is /nfs:
      The OVF file /nfs/vm.ovf is scanned.
      The OVF file /nfs/subfolder1/vm.ovf is scanned.
      The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      But, the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

3.9. OpenShift Virtualization prerequisites

The following prerequisites apply to migrations from one OpenShift Virtualization cluster to another:

  • Both the source and destination OpenShift Virtualization clusters must have the same version of Migration Toolkit for Virtualization (MTV) installed.
  • The source cluster must use OpenShift Virtualization 4.16 or later.
  • Migration from a later version of OpenShift Virtualization to an earlier one is not supported.
  • Migration from an earlier version of OpenShift Virtualization to a later version is supported if both are supported by the current version of MTV. For example, if the current version of OpenShift Virtualization is 4.18, a migration from version 4.16 or 4.17 to version 4.18 is supported, but a migration from version 4.15 to any version is not.
Important

It is strongly recommended to migrate only between clusters with the same version of OpenShift Virtualization, although migration from an earlier version of OpenShift Virtualization to a later one is supported.

3.10. Software compatibility guidelines

You must install compatible software versions.

Table 3.5. Compatible software versions

    Migration Toolkit for Virtualization   2.8
    Red Hat OpenShift                      4.18, 4.17, 4.16
    OpenShift Virtualization               4.18, 4.17, 4.16
    VMware vSphere                         6.5 or later
    Red Hat Virtualization                 4.4 SP1 or later
    OpenStack                              16.1 or later

Migration from Red Hat Virtualization 4.3

MTV 2.8 was tested only with Red Hat Virtualization (RHV) 4.4 SP1; migration from RHV 4.3 has not been tested with MTV 2.8. While not supported, basic migrations from RHV 4.3 are expected to work: migrations from RHV 4.3.11 were tested with MTV 2.3 and may work in practice in many environments using MTV 2.8. In any case, it is recommended that you upgrade Red Hat Virtualization Manager (RHVM) to the supported version listed above before migrating to OpenShift Virtualization.

3.10.1. OpenShift Operator Life Cycles

For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.
