Chapter 10. Planning a migration of virtual machines from VMware vSphere


Create a VMware vSphere migration plan by setting up network maps, configuring source and destination providers with migration networks, and defining the migration plan in the Migration Toolkit for Virtualization (MTV) UI.
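
10.1. Creating ownerless network maps by using the form page of the MTV UI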

You can add ownerless network maps by using the form page of the Migration Toolkit for Virtualization (MTV) UI. Later, you can add these maps to a migration plan by using the Use an existing network map option in the Network map page of the MTV wizard.

For more information about network and storage maps in MTV, see Mapping networks and storage in migration plans.

Prerequisites

Procedure

  1. In the Red Hat OpenShift web console, click Migration for Virtualization > Network maps.
  2. Click Create network map > Create with form.
  3. Specify the following:

    • Network map name: Name of the network map.
    • Project: Select from the list.
    • Source provider: Select from the list.
    • Target provider: Select from the list.
    • Source network: Select from the list.
    • Target network: Select from the list.
  4. Optional: Click Add mapping to create additional network maps, including mapping multiple network sources to a single target network.
  5. Click Create.

    Your map appears in the list of network maps.
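
10.2. Creating ownerless network maps by using the YAML page of the MTV UI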

You can add ownerless network maps by using YAML or JSON definitions in the YAML page of the Migration Toolkit for Virtualization (MTV) UI to map source networks to OpenShift Virtualization networks. Later, you can add these maps to a migration plan by using the Use an existing network map option in the Network map page of the MTV wizard.

For more information about network and storage maps in MTV, see Mapping networks and storage in migration plans.

Prerequisites

Procedure

  1. In the Red Hat OpenShift web console, click Migration for Virtualization > Network maps.
  2. Click Create network map > Create with YAML.
  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
  4. If you enter YAML definitions, use the following:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod
          source:
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    • type: Allowed values are pod, multus, and ignored. Use ignored to avoid attaching VMs to this network for this migration.
    • source: You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (moRef). For more information about retrieving the moRef, see Retrieving a VMware vSphere moRef in Migrating your virtual machines to Red Hat OpenShift Virtualization.
    • <network_attachment_definition>: Specify a network attachment definition for each additional OpenShift Virtualization network.
    • <network_attachment_definition_namespace>: Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  5. Optional: To download your input, click Download.
  6. Click Create.

    Your map appears in the list of network maps.
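
10.3. Creating ownerless storage maps by using the form page of the MTV UI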

You can add ownerless storage maps by using the form page of the Migration Toolkit for Virtualization (MTV) UI. Later, you can add these maps to a migration plan by using the Use an existing storage map option in the Storage map page of the MTV wizard.

For more information about network and storage maps in MTV, see Mapping networks and storage in migration plans.

Prerequisites

Procedure

  1. In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
  2. Click Create storage map > Create with form.
  3. Specify the following:

    • Map name: Name of the storage map.
    • Project: Select from the list.
    • Source provider: Select from the list.
    • Target provider: Select from the list.
    • Source storage: Select from the list.
    • Target storage: Select from the list.
  4. Optional: If this is a storage map for a migration using storage copy offload, specify the following offload options:

    • Offload plugin: Select vSphere XCOPY from the list.
    • Storage secret: Select from the list.
    • Storage product: Select from the list.

      Note

      Storage copy offload is a feature that allows you to migrate VMware virtual machines (VMs) that are in a storage array network (SAN) more efficiently. This feature makes use of the command vmkfstools on the ESXi host, which invokes the XCOPY command on the storage array using an Internet Small Computer Systems Interface (iSCSI) or Fibre Channel (FC) connection. Storage copy offload lets you copy data inside a SAN more efficiently than copying the data over a network. For Migration Toolkit for Virtualization (MTV) 2.11, storage copy offload is available as GA for cold migration and as a Technology Preview feature for warm migration.

      For more information about storage copy offload, see Migrating VMware virtual machines by using storage copy offload.

  5. Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.
  6. Click Create.

    Your map appears in the list of storage maps.
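
10.4. Creating ownerless storage maps by using the YAML page of the MTV UI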

You can add ownerless storage maps by using YAML or JSON definitions in the YAML page of the Migration Toolkit for Virtualization (MTV) UI. Later, you can add these maps to a migration plan by using the Use an existing storage map option in the Storage map page of the MTV wizard.

For more information about network and storage maps in MTV, see Mapping networks and storage in migration plans.

Prerequisites

Procedure

  1. In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
  2. Click Create storage map > Create with YAML.

    The Create StorageMap page opens.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
  4. If you enter YAML definitions, use the following:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            id: <source_datastore>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    • accessMode: Allowed values are ReadWriteOnce and ReadWriteMany.
    • id: Specify the VMware vSphere datastore moRef, for example, datastore-11. For more information about retrieving the moRef, see Retrieving a VMware vSphere moRef in Migrating your virtual machines to Red Hat OpenShift Virtualization.
  5. Optional: To download your input, click Download.
  6. Click Create.

    Your map appears in the list of storage maps.
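
10.5. Migrating VMware virtual machines by using storage copy offload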

Migrate VMware virtual machines (VMs) that are in a storage array network (SAN) more efficiently by using the storage copy offload feature of Migration Toolkit for Virtualization (MTV). Use this feature to accelerate migration speed and to reduce the load on your network.

VMware’s vSphere Storage APIs-Array Integration (VAAI) includes a command named vmkfstools. This command sends the XCOPY command, which is part of the SCSI protocol. The XCOPY command lets you copy data inside a SAN more efficiently than copying the data over a network. The command is executed by a populator named vsphere-xcopy-volume-populator.

Migration Toolkit for Virtualization (MTV) 2.10.0 leverages this command as the basis for storage copy offload, which clones your VMs' data to the storage hardware instead of transmitting it between MTV and OpenShift Virtualization. This improved migration saves both time and resources.

You enable storage copy offload by configuring the storage map in your migration plan to point to your storage array instead of the network you usually use for migration. When you start the migration plan, MTV migrates your VMs by copying them to the storage array you choose and using XCOPY to copy them directly to OpenShift Virtualization, instead of transmitting the contents of your VMs to OpenShift Virtualization.

The storage copy offload feature has some unique configuration prerequisites, which are discussed in Planning and running storage copy offload migrations. After you configure your system, you can run migration plans that use storage copy offload by using either the MTV UI or the CLI. Instructions for using storage copy offload are integrated into the procedures for migrating VMware VMs for both the UI and the CLI.

You must ensure that your migration plans do not mix VDDK mappings with copy-offload mappings. Because the migration controller copies disks either through CDI volumes (VDDK) or through Volume Populators (copy-offload), all storage pairs in the plan must either include copy-offload details (a Secret + product) or exclude them entirely. Otherwise, the plan fails.
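
For reference, the following sketch shows the shape of a copy-offload entry in a storage map. The offloadPlugin field names are based on the forklift API, and the Secret name and storage product value are placeholders that must match the credentials Secret and storage vendor you configure:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
          offloadPlugin:
            vsphereXcopyConfig:
              secretRef: <offload_credentials_secret>
              storageVendorProduct: <storage_product>
          source:
            id: <source_datastore>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>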

For storage copy offload migrations, especially performance-sensitive migrations, it is strongly recommended to delete any preexisting snapshots before you start the migration.

Important

Preexisting snapshots cause MTV to create snapshot delta chains. These chains prevent the array from performing block-level hardware acceleration for the hardware clone because the vmkfstools clone path used for offload cannot use hardware-accelerated XCOPY.

As a result, the operation falls back to a software clone (host-based copy). Although such a fallback migration is faster than a standard migration, it increases migration time and ESXi workload compared to an XCOPY migration.

Snapshots created by a warm migration do not affect storage copy offload migrations. Only preexisting snapshots cause this issue.

For Migration Toolkit for Virtualization (MTV) 2.11, storage copy offload is available as GA for cold migration and as a Technology Preview feature for warm migration.

Important

Storage copy offload for warm migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

10.5.1. How storage copy offload works

Without storage copy offload, MTV migrates a virtual disk as follows:

  1. MTV reads the disk from the source storage.
  2. MTV sends the data over a network to OpenShift Virtualization.
  3. OpenShift Virtualization writes the data to its storage.

    This method can be slow and consume significant network and host resources.

With storage copy offload, the process is streamlined:

  1. MTV initiates a disk transfer request.
  2. Instead of sending the data, MTV instructs the storage array that holds the vSphere Virtual Machine File System (VMFS) datastore containing the source VMs to perform a direct copy from the source storage to the target volume, on the same array, in the correct storage class.

    The storage array handles the cloning of the VM disk internally, often at a much higher speed than a network-based transfer.

The Forklift project, a key component of MTV, includes a specialized volume populator named vsphere-xcopy-volume-populator that directly interacts with VMware’s VAAI. This allows MTV to trigger the high-speed, array-level data copy operation for supported storage systems.
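
To confirm that the populator controller is available in your environment, you can check for its deployment in the MTV namespace. This is a quick sanity check; the deployment name matches the one referenced later in this chapter:

    $ oc get deployment forklift-volume-populator-controller -n openshift-mtv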

Important

The storage arrays must be among the supported storage providers listed in Supported storage providers. Otherwise, the ESXi host performs a fallback network disk copy instead of XCOPY. Although a fallback network disk copy on the ESXi host is usually considerably faster than a standard migration using a VDDK image over the network, it is not as quick as a properly configured storage copy offload migration.

10.5.2. Supported storage providers

The following storage providers support storage copy offload:

  • Hitachi Vantara
  • NetApp ONTAP
  • Pure Storage FlashArray
  • Dell PowerMax
  • Dell PowerFlex
  • Dell PowerStore
  • HPE 3PAR
  • HPE Primera
  • Infinidat InfiniBox
  • IBM FlashSystem
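
10.5.3. Planning and running storage copy offload migrations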

You need to perform the following steps when you plan and run storage copy offload migrations:

Procedure

  1. Before your first migration, choose and implement a cloning method. This step is discussed in Cloning methods used by storage copy offload and in the sections that follow it.
  2. For each migration, follow the procedure in either Migrating VMware vSphere VMs in the UI by using storage copy offload or Migrating VMware vSphere VMs in the CLI by using storage copy offload.
  3. If you encounter problems that are specific to storage copy offload, consult Troubleshooting storage copy offload.
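
10.5.4. Cloning methods used by storage copy offload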

You can use either of the following two cloning (copying) methods to run storage copy offload migrations: vSphere Installation Bundle (VIB) or SSH. Both methods use the volume populator named vsphere-xcopy-volume-populator to perform vmkfstools clone operations on ESXi hosts.

vSphere Installation Bundle (VIB) is the default method. This method uses a custom VIB installed on ESXi hosts to expose vmkfstools operations through the vSphere API.

SSH is the recommended method. This method uses SSH to run vmkfstools commands directly on ESXi hosts. It is useful when VIB installation is not possible, and it offers the advantages described in the following section.

10.5.4.1. Advantages of the SSH method

The SSH method offers you the following advantages:

  • No VIB installation: Does not require custom VIB deployment on ESXi hosts
  • Standard SSH: Uses the standard ESXi SSH service with no custom components
  • Security: Uses secure key-based authentication with command restrictions
  • Compatibility: Works with any ESXi version that supports SSH
  • Flexibility: Easier to troubleshoot and monitor SSH connections
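
10.5.4.2. Setting up storage copy offload by using the VIB clone method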

You can set up storage copy offload by using the vSphere Installation Bundle (VIB). This is the default method for running storage copy offload migrations.

Important

If you use this method, you must install the VIB on every ESXi host that you use for copy-offload operations.

Prerequisites

  • Podman or Docker installed on your local machine.
  • Root user SSH access to ESXi hosts.
  • SSH private key for ESXi authentication. This can be the same key used for the SSH clone method, if you use both.
  • Optional: vSphere credentials. These allow you to auto-discover ESXi hosts.

Procedure

  1. Configure the VIB clone method in your Provider CR by setting settings:esxiCloneMethod to "vib".

    Example Provider CR

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: my-vsphere-provider
      namespace: openshift-mtv
    spec:
      type: vsphere
      url: https://vcenter.example.com
      secret:
        name: vsphere-credentials
        namespace: openshift-mtv
      settings:
        esxiCloneMethod: "vib"

  2. Install the VIB by using the vib-installer utility included in the container image. Use one of the following methods:

    1. Auto-discover ESXi hosts from vSphere and install the VIB by running the following command:

      $ podman run -it --rm \
        --entrypoint /bin/vib-installer \
        -v $HOME/.ssh/id_rsa:/tmp/esxi_key:Z \
        -e GOVMOMI_USERNAME=administrator@vsphere.local \
        -e GOVMOMI_PASSWORD=your-password \
        -e GOVMOMI_HOSTNAME=vcenter.example.com \
        -e GOVMOMI_INSECURE=true \
        $(oc get deployment forklift-volume-populator-controller -n openshift-mtv -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="VSPHERE_XCOPY_VOLUME_POPULATOR_IMAGE")].value}') \
        --ssh-key-file /tmp/esxi_key \
        --datacenter MyDatacenter
    2. Alternatively, specify ESXi hosts manually and install the VIB by running the following command:

      $ podman run -it --rm \
        --entrypoint /bin/vib-installer \
        -v $HOME/.ssh/id_rsa:/tmp/esxi_key:Z \
        $(oc get deployment forklift-volume-populator-controller -n openshift-mtv -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="VSPHERE_XCOPY_VOLUME_POPULATOR_IMAGE")].value}') \
        --ssh-key-file /tmp/esxi_key \
        --esxi-hosts esxi1.example.com,esxi2.example.com,esxi3.example.com
      Note

      Run vib-installer --help for a list of all available flags. Flags match the main populator naming conventions and support environment variables such as SSH_KEY_FILE, ESXI_HOSTS, and GOVMOMI_USERNAME.

      Note

      For alternative VIB installation methods using Ansible, see Esxcli plugin that wraps vmkfstools.
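
10.5.4.3. SSH keys for storage copy offload migrations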

You can use either an automatically generated SSH key or a manually generated SSH key for your storage copy offload migrations.

Although SSH keys are automatically generated when you choose the SSH method, you can also generate SSH keys manually.

Procedures for both options are given in the sections that follow.
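
10.5.4.3.1. SSH key requirements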

  • All public keys must include command restrictions for security.
  • The command path in the restrictions must match the secure script path: /vmfs/volumes/{datastore-name}/secure-vmkfstools-wrapper.py.
  • You must install the SSH key in each ESXi host in your migration environment.
  • SSH service must be enabled on all target ESXi hosts.
  • To support ESXi access control, commands are restricted to vmkfstools operations.
10.5.4.3.2. Security recommendations

Follow these security recommendations:

  • Use separate key pairs for different environments.
  • Rotate keys periodically.
  • Consider using shorter-lived keys for enhanced security.
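
10.5.4.4. Setting up storage copy offload by using automatically generated SSH keys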

By default, when you use the SSH method for setting up storage copy offload migrations, SSH keys are automatically generated when you create or update a relevant vSphere provider.

These keys have the following characteristics:

  • 2048-bit RSA keys
  • Stored in separate Kubernetes Secrets in the provider’s namespace
  • Automatically injected into migration pods as needed

SSH keys are stored in secrets with predictable names based on the name of your vSphere provider:

Table 10.1. Patterns of SSH secret names

  • Private key: offload-ssh-keys-{provider-name}-private. Contains private-key: the RSA private key in PEM format.
  • Public key: offload-ssh-keys-{provider-name}-public. Contains public-key: the SSH public key in authorized_keys format.

Example: For a provider with the name vcenter-example, the secrets would be offload-ssh-keys-vcenter-example-private and offload-ssh-keys-vcenter-example-public.

Prerequisites

  • Ensure that SSH traffic is permitted from the OpenShift Virtualization network to the ESXi hosts.

Procedure

  1. Configure the SSH clone method in your Provider CR by setting settings:esxiCloneMethod to "ssh".

    Example Provider CR

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: my-vsphere-provider
      namespace: openshift-mtv
    spec:
      type: vsphere
      url: https://vcenter.example.com
      secret:
        name: vsphere-credentials
        namespace: openshift-mtv
      settings:
        esxiCloneMethod: "ssh"

  2. Find the SSH secrets for your vSphere Provider by running one of the following commands:

    1. List all SSH key secrets in the provider’s namespace by running the following command:

      $ oc get secrets -l app.kubernetes.io/component=ssh-keys -n openshift-mtv
    2. View a specific private or public key secret by running the following command:

      $ oc get secret <name_of_private_or_public_key> -o yaml -n openshift-mtv
  3. Optional: If needed, you can replace an auto-generated key pair by running the following command:

    $ ssh-keygen -t rsa -b 4096 -f custom_esxi_key -N ""

    This is a simpler procedure than manually generating the key pair, as described in Manually generating restricted SSH keys.

  4. Optional: If needed, you can replace either a private key secret or a public key secret by running one of the following commands:

    1. Replace a private key secret by running the following command:

      $ oc create secret generic <name_of_private_key> \
        --from-file=private-key=custom_esxi_key \
        --dry-run=client -o yaml | oc replace -f - -n openshift-mtv
    2. Replace a public key secret by running the following command:

      $ oc create secret generic <name_of_public_key> \
        --from-file=public-key=custom_esxi_key.pub \
        --dry-run=client -o yaml | oc replace -f - -n openshift-mtv
  5. Optional: Configure the SSH timeout by adding it to your provider secret, which is the main storage credentials secret, by running the following command:

    $ oc patch secret <provider_credentials> -n <provider_namespace> \
      -p '{"data":{"SSH_TIMEOUT_SECONDS":"'"$(echo -n "60" | base64)"'"}}'
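
10.5.4.5. Manually generating restricted SSH keys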

You can manually generate restricted SSH keys to use for storage copy offload migrations. After you generate the keys, you can then add the public key to your ESXi hosts.

Prerequisites

  • Ensure that SSH traffic is permitted from the OpenShift Virtualization network to the ESXi hosts.
  • Ensure you have network access from your local machine to the ESXi host.

Procedure

  1. Configure the SSH clone method in your Provider CR by setting settings:esxiCloneMethod to "ssh".

    Example Provider CR

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: my-vsphere-provider
      namespace: openshift-mtv
    spec:
      type: vsphere
      url: https://vcenter.example.com
      secret:
        name: vsphere-credentials
        namespace: openshift-mtv
      settings:
        esxiCloneMethod: "ssh"

  2. Get the public key from the auto-generated secret by performing the following steps:

    1. List the SSH key secrets in the namespace that holds the key by running the following command:

      $ oc get secrets -l app.kubernetes.io/component=ssh-keys -n <namespace_with_key>
    2. Extract the public key you want by running the following command:

      $ oc get secret <your_public_key> \
        -o jsonpath='{.data.public-key}' -n <namespace_with_key> | base64 -d > esxi_public_key.pub
    3. View the public key by running the following command:

      $ cat esxi_public_key.pub
  3. Prepare the restricted key entry by performing the following steps:

    1. Prefix the public key with command restrictions by running the following command:

      $ echo 'command="python /vmfs/volumes/<datastore_name>/secure-vmkfstools-wrapper.py",no-port-forwarding,no-agent-forwarding,no-X11-forwarding '$(cat esxi_public_key.pub) > restricted_key.pub

      This command prepends the command restrictions to your public key.

    2. View the final restricted key by running the following command:

      $ cat restricted_key.pub
  4. Install the restricted key on the ESXi host directly by running the following command:

    $ cat restricted_key.pub | ssh root@<your_ESXi_host_IP> \
      'cat >> /etc/ssh/keys-root/authorized_keys'
  5. Verify the installation by performing the following steps:

    1. Extract the private key from the secret by running the following command:

      $ oc get secret <your_private_key> \
        -o jsonpath='{.data.private-key}' -n <namespace_with_key> | base64 -d > esxi_private_key
    2. Set the permissions of your private key by running the following command:

      $ chmod 600 esxi_private_key
    3. Test the connection by running the following command:

      $ ssh -i esxi_private_key root@<your_ESXi_host_IP>

      If the installation was successful, you are connected to the ESXi host with restricted commands.

    4. Run a test command that is restricted to the secure script to verify the connection.
  6. Clean up the local key files by running the following command:

    $ rm -f esxi_public_key.pub restricted_key.pub esxi_private_key
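
10.5.5. Migrating VMware vSphere VMs in the UI by using storage copy offload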

You can use the storage copy offload feature of Migration Toolkit for Virtualization (MTV) to migrate VMware vSphere virtual machines (VMs) faster than by other methods.

Prerequisites

In addition to the regular VMware prerequisites, storage copy offload has the following prerequisites:

  • One of the following storage systems, configured:

    • Hitachi Vantara
    • NetApp ONTAP
    • Pure Storage FlashArray
    • Dell PowerMax
    • Dell PowerFlex
    • Dell PowerStore
    • HPE 3PAR or HPE Primera
    • Infinidat InfiniBox
    • IBM FlashSystem
  • A working Container Storage Interface (CSI) driver connected to the storage system and to OpenShift Virtualization
  • A configured VMware vSphere provider
  • vSphere users must have a role that includes the following privileges (suggested name: StorageOffloader):

    • Global

      • Settings
    • Datastore

      • Browse datastore
      • Low level file operations
    • Host Configuration

      • Advanced settings
      • Query patch
      • Storage partition configuration
  • It is strongly recommended to delete any preexisting snapshots before you start a storage copy offload migration, especially a performance-sensitive migration. Preexisting snapshots prevent the use of XCOPY and force vmkfstools to use a slower software-based cloning method.
Important

Snapshots created by a warm migration do not affect storage copy offload migrations. Only preexisting snapshots cause this issue.

Procedure

  1. In the MTV Operator, set the value of feature_copy_offload to true in forklift-controller by running the following command:

    $ oc patch forkliftcontrollers.forklift.konveyor.io forklift-controller -n openshift-mtv --type merge -p '{"spec": {"feature_copy_offload": "true"}}'
  2. Create a Secret in the namespace in which the migration provider is set up, usually openshift-mtv. Include the credentials from the appropriate vendor in your Secret.

    Table 10.2. Credentials for a Hitachi storage copy offload Secret

      • GOVMOMI_HOSTNAME: Hostname or URL of the vSphere API (string). Mandatory.
      • GOVMOMI_USERNAME: User name for the vSphere API (string). Mandatory.
      • GOVMOMI_PASSWORD: Password for the vSphere API (string). Mandatory.
      • STORAGE_HOSTNAME: Hostname or URL of the storage vendor API (string). Mandatory.
      • STORAGE_USERNAME: User name for the storage vendor API (string). Mandatory.
      • STORAGE_PASSWORD: Password for the storage vendor API (string). Mandatory.
      • STORAGE_PORT: Port of the storage vendor API (string). Mandatory.
      • STORAGE_ID: Storage array serial number (string). Mandatory.
      • HOSTGROUP_ID_LIST: List of IO ports and host group IDs, for example, CL1-A,1:CL2-B,2:CL4-A,1:CL6-A,1. Mandatory.

    Table 10.3. Credentials for a NetApp ONTAP storage copy offload Secret

      • STORAGE_HOSTNAME: IP or URL of the host (string). Either enter the management IP for the entire cluster or enter a dedicated storage virtual machine management logical interface (SVM LIF). Mandatory.
      • STORAGE_USERNAME: The user’s name (string). Mandatory.
      • STORAGE_PASSWORD: The user’s password (string). Mandatory.
      • STORAGE_SKIP_SSL_VERIFICATION: If set to true, SSL verification is not performed (true, false). Optional. Default: false.
      • ONTAP_SVM: The storage virtual machine (SVM) to be used in all client interactions. It can be taken from the trident.netapp.io/v1/TridentBackend config.ontap_config.svm resource field. Mandatory.

    Table 10.4. Credentials for a Pure FlashArray storage copy offload Secret

      • STORAGE_HOSTNAME: IP or URL of the host (string). Mandatory.
      • STORAGE_USERNAME: The user’s name (string). Mandatory.
      • STORAGE_PASSWORD: The user’s password (string). Mandatory.
      • STORAGE_SKIP_SSL_VERIFICATION: If set to true, SSL verification is not performed (true, false). Optional. Default: false.
      • PURE_CLUSTER_PREFIX: The cluster prefix that is set in the StorageCluster resource. Retrieve it by running printf "px_%.8s" $(oc get storagecluster -A -o=jsonpath='{.items[?(@.spec.cloudStorage.provider=="pure")].status.clusterUid}') in the CLI. Mandatory.

    Table 10.5. Credentials for a Dell PowerMax storage copy offload Secret

      • STORAGE_HOSTNAME: IP or URL of the host (string). Mandatory.
      • STORAGE_USERNAME: The user’s name (string). Mandatory.
      • STORAGE_PASSWORD: The user’s password (string). Mandatory.
      • STORAGE_SKIP_SSL_VERIFICATION: If set to true, SSL verification is not performed (true, false). Optional. Default: false.
      • POWERMAX_SYMMETRIX_ID: The Symmetrix ID of the storage array. It can be taken from the config map in the powermax namespace, which the CSI driver uses. Mandatory.
      • POWERMAX_PORT_GROUP_NAME: The port group to use for masking view creation. Mandatory.

    Table 10.6. Credentials for a Dell PowerFlex storage copy offload Secret

      • STORAGE_HOSTNAME: IP or URL of the host (string). Mandatory.
      • STORAGE_USERNAME: The user’s name (string). Mandatory.
      • STORAGE_PASSWORD: The user’s password (string). Mandatory.
      • STORAGE_SKIP_SSL_VERIFICATION: If set to true, SSL verification is not performed (true, false). Optional. Default: false.
      • POWERFLEX_SYSTEM_ID: The system ID of the storage array. It can be taken from vxflexos-config in the vxflexos namespace or in the openshift-operators namespace. Mandatory.

    Table 10.7. Credentials for a Dell PowerStore storage copy offload Secret

      • STORAGE_HOSTNAME: IP or URL of the host (string). Mandatory.
      • STORAGE_USERNAME: The user’s name (string). Mandatory.
      • STORAGE_PASSWORD: The user’s password (string). Mandatory.
      • STORAGE_SKIP_SSL_VERIFICATION: If set to true, SSL verification is not performed (true, false). Optional. Default: false.

    Table 10.8. Credentials for an HPE 3PAR or HPE Primera storage copy offload Secret

      • STORAGE_HOSTNAME: Must include the full URL with protocol. For HPE 3PAR, it must also include the Web Services API (WSAPI) port. Use the HPE 3PAR command cli% showwsapi to determine the correct WSAPI port. HPE 3PAR systems default to port 8080 for both HTTP and HTTPS connections; HPE Primera defaults to port 443 (SSL/HTTPS). Depending on configured certificates, you might need to skip SSL verification. Example: https://192.168.1.1:8080. Mandatory.
      • STORAGE_USERNAME: The user’s name (string). Mandatory.
      • STORAGE_PASSWORD: The user’s password (string). Mandatory.
      • STORAGE_SKIP_SSL_VERIFICATION: If set to true, SSL verification is not performed (true, false). Optional. Default: false.

    Table 10.9. Credentials for an Infinidat InfiniBox storage copy offload Secret

      • STORAGE_HOSTNAME: IP or URL of the host (string). Mandatory.
      • STORAGE_USERNAME: The user’s name (string). Mandatory.
      • STORAGE_PASSWORD: The user’s password (string). Mandatory.
      • STORAGE_SKIP_SSL_VERIFICATION: If set to true, SSL verification is not performed (true, false). Optional. Default: false.

    Table 10.10. Credentials for an IBM FlashSystem storage copy offload Secret

      • STORAGE_HOSTNAME: IP or URL of the host (string). Mandatory.
      • STORAGE_USERNAME: The user’s name (string). Mandatory.
      • STORAGE_PASSWORD: The user’s password (string). Mandatory.
      • STORAGE_SKIP_SSL_VERIFICATION: If set to true, SSL verification is not performed (true, false). Optional. Default: false.
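
    For example, a NetApp ONTAP credentials Secret built from the keys in Table 10.3 might look like the following sketch. The Secret name and all values are placeholders:

      $ cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: <offload_credentials_secret>
        namespace: openshift-mtv
      stringData:
        STORAGE_HOSTNAME: "<storage_api_hostname>"
        STORAGE_USERNAME: "<storage_api_username>"
        STORAGE_PASSWORD: "<storage_api_password>"
        STORAGE_SKIP_SSL_VERIFICATION: "false"
        ONTAP_SVM: "<svm_name>"
      EOF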

  3. In the UI, complete the following steps:

    1. Create an ownerless storage map by using the procedure in Creating ownerless storage maps using the form page of the MTV UI. Use the Offload plugin named vSphere XCOPY.
    2. Create a migration plan by using the procedure in Creating a VMware vSphere migration plan by using the MTV wizard.

10.6. Adding a VMware vSphere source provider

You can migrate VMware vSphere VMs from VMware vCenter or from a VMware ESX/ESXi server without going through vCenter.

Considerations
  • EMS enforcement is disabled for migrations with VMware vSphere source providers in order to enable migrations from versions of vSphere that are supported by Migration Toolkit for Virtualization but do not comply with the 2023 FIPS requirements. Therefore, users should consider whether migrations from vSphere source providers risk their compliance with FIPS. Supported versions of vSphere are specified in Software compatibility guidelines.
  • Anti-virus software can cause migrations to fail. It is strongly recommended to remove such software from source VMs before you start a migration.
  • MTV does not support migrating VMware Non-Volatile Memory Express (NVMe) disks.
  • If you set a maximum transmission unit (MTU) value other than the default on your migration network, you must also set the same value on the OpenShift transfer network that you use. For more information about the OpenShift transfer network, see Creating a VMware vSphere migration plan using the MTV wizard.

Prerequisites

  • It is strongly recommended to create a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters. A VDDK image accelerates migration and reduces the risk of a plan failing. If you are not using VDDK and a plan fails, retry with VDDK installed. For more information, see Creating a VDDK image.
Warning

Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.

Procedure

  1. Access the Create provider page for VMware by doing one of the following:

    1. In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.
      2. Select a Project from the list. The default project shown depends on the active project of MTV.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.

      3. Click VMware.
    2. If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click VMware.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click VMware when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of MTV.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    1. Provider details

      • Provider resource name: Name of the source provider.
      • Endpoint type: Select the vSphere provider endpoint type. Options: vCenter or ESXi. You can migrate virtual machines from vCenter, an ESX/ESXi server that is not managed by vCenter, or from an ESX/ESXi server that is managed by vCenter but does not go through vCenter.
      • URL: URL of the SDK endpoint of the vCenter on which the source VM is mounted. Ensure that the URL includes the sdk path, usually /sdk. For example, https://vCenter-host-example.com/sdk. If a certificate for FQDN is specified, the value of this field needs to match the FQDN in the certificate.
      • VDDK init image: VDDKInitImage path. It is strongly recommended to create a VDDK init image to accelerate migrations. For more information, see Creating a VDDK image.

        Do one of the following:

        • Select Skip VMware Virtual Disk Development Kit (VDDK) SDK acceleration (not recommended).
        • Enter the path in the VDDK init image text box. Format: <registry_route_or_server_path>/vddk:<tag>.
        • Upload a VDDK archive and build a VDDK init image from the archive by doing the following:

          • Click Browse next to the VDDK init image archive text box, select the desired file, and click Select.
          • Click Upload.

            The URL of the uploaded archive is displayed in the VDDK init image archive text box.

    2. Provider credentials

      • Username: vCenter user or ESXi user. For example, user@vsphere.local.
      • Password: vCenter user password or ESXi user password.
  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.
    • Use the system CA certificate: Migrate after validating the system CA certificate.
    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.
  4. Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.
    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

    Note

    It might take a few minutes for the provider to have the status Ready.

  6. Optional: Add access to the UI of the provider:

    1. On the Providers page, click the provider.

      The Provider details page opens.

    2. Click the Edit icon under External UI web link.
    3. Enter the link and click Save.

      Note

      If you do not enter a link, MTV attempts to calculate the correct link.

      • If MTV succeeds, the hyperlink of the field points to the calculated link.
      • If MTV does not succeed, the field remains empty.
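
10.7. Selecting a migration network for a VMware source provider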

You can select a migration network in the Red Hat OpenShift web console for a source provider to reduce risk to the source environment and to improve performance.

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

Note

If you set a maximum transmission unit (MTU) value other than the default on your migration network, you must also set the same value on the OpenShift transfer network that you use. For more information about the OpenShift transfer network, see Creating a migration plan.

Prerequisites

  • The migration network must have sufficient throughput, with a minimum speed of 10 Gbps, for disk transfer.
  • The migration network must be accessible to the OpenShift Virtualization nodes through the default gateway.

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

  • The target namespace must have network connectivity to the VMware source environment.

    Migration pods run in the target namespace and require outbound access to the VMware API and ESXi hosts. If you use NetworkPolicies to restrict egress connections from the target namespace, you must configure policies that allow connections to VMware, as in the sketch after this list. This requirement applies whether you use the pod network, user-defined networks (UDNs), or cluster user-defined networks (CUDNs) in the target namespace.

  • The migration network should have jumbo frames enabled.
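
The following is a minimal egress NetworkPolicy sketch that allows pods in the target namespace to reach the VMware environment. The namespace and CIDR are placeholders; adjust them to match your vCenter and ESXi subnet:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-egress-to-vmware
      namespace: <target_namespace>
    spec:
      podSelector: {}
      policyTypes:
        - Egress
      egress:
        - to:
            - ipBlock:
                cidr: <vmware_subnet_cidr>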

Procedure

  1. In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
  2. Click the host number in the Hosts column beside a provider to view a list of hosts.
  3. Select one or more hosts and click Select migration network.
  4. Specify the following fields:

    • Network: Network name
    • ESXi host admin username: For example, root
    • ESXi host admin password: Password
  5. Click Save.
  6. Verify that the status of each host is Ready.

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.
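
10.8. Adding a Red Hat OpenShift Virtualization destination provider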

Use a Red Hat OpenShift Virtualization provider as both source and destination provider. You can migrate VMs from the cluster that Migration Toolkit for Virtualization (MTV) is deployed on to another cluster or from a remote cluster to the cluster that MTV is deployed on.

Prerequisites

Procedure

  1. Access the Create OpenShift Virtualization provider interface by doing one of the following:

    1. In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.
      2. Select a Project from the list. The default project shown depends on the active project of MTV.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.

      3. Click OpenShift Virtualization.
    2. If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click OpenShift Virtualization.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenShift Virtualization when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of MTV.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider
    • URL: URL of the endpoint of the API server
    • Service account bearer token: Token for a service account with cluster-admin privileges

      If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.
    • Use the system CA certificate: Migrate after validating the system CA certificate.
    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.
  4. Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.
    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.
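
10.9. Selecting a migration network for an OpenShift Virtualization provider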

You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.

In MTV version 2.9 and earlier, MTV used the pod network as the default network.

In version 2.10.0 and later, MTV detects if you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN for the namespace of the migration, you do not need to select a new default network when you create your migration plan.

Considerations
  • MTV supports using UDNs for all providers except OpenShift Virtualization.
  • You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure

  1. In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
  2. Click the OpenShift Virtualization provider whose migration network you want to change.

    The Provider details page opens.

  3. Click the Networks tab.
  4. Click Set default transfer network.
  5. Select a default transfer network from the list and click Save.
  6. Configure a gateway in the network used for MTV migrations by completing the following steps:

    1. In the Red Hat OpenShift web console, click Networking > NetworkAttachmentDefinitions.
    2. Select the appropriate default transfer network NAD.
    3. Click the YAML tab.
    4. Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: localnet-network
        namespace: mtv-test
        annotations:
          forklift.konveyor.io/route: <IP address>
      • The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
    5. Click Save.
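
10.10. Creating a VMware vSphere migration plan by using the MTV wizard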

You can migrate VMware vSphere virtual machines (VMs) from VMware vCenter or from a VMware ESX or ESXi server by using the Migration Toolkit for Virtualization plan creation wizard.

The wizard is designed to lead you step-by-step in creating a migration plan.

Limitations
  • Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. Excluding these VMs prevents concurrent disk access to the storage that the guest points to.
  • A plan cannot contain more than 500 VMs or 500 disks.
Warning

Migration Toolkit for Virtualization (MTV) cannot migrate VMware vSphere 6 and VMware vSphere 7 VMs to a FIPS-compliant OpenShift Virtualization cluster.

Prerequisites

  • Have a VMware source provider and an OpenShift Virtualization destination provider. For more information, see Adding a VMware vSphere source provider or Adding an OpenShift Virtualization destination provider.
  • If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.
  • If you are using a user-defined network (UDN), note the name of its namespace as defined in OpenShift Virtualization.

Procedure

  1. On the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.
  2. Click Create plan.

    The Create migration plan wizard opens.

  3. On the General page, specify the following fields:

    • Plan name: Enter a name.
    • Plan project: Select from the list.
    • Source provider: Select from the list.
    • Target provider: Select from the list.
    • Target project: Click the list and do one of the following:

      1. Select an existing project from the list.
      2. Create a new project by clicking Create project and doing the following:

        1. Enter the Name of the project. A project name must consist of lowercase alphanumeric characters or -. A project name must start and end with alphanumeric characters. For example, my-name or 123-abc.
        2. Optional: Enter a Display name for the project.
        3. Optional: Enter a Description of the project.
        4. Click Create project.
  4. Click Next.
  5. On the Virtual machines page, select the virtual machines you want to migrate and click Next.
  6. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
  7. On the Network map page, choose one of the following options:

    • Use an existing network map: Select an existing network map from the list.

      These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.

      Note

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.

      Note

      You can create an ownerless network map, which you and others can use for additional migration plans, in the Network maps section of the UI.

      • Source network: Select from the list.
      • Target network: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Network map name: Enter a name or let MTV automatically generate a name for the network map.
  8. Click Next.
  9. On the Storage map page, choose one of the following options:

    • Use an existing storage map: Select an existing storage map from the list.

      These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.

      Note

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.

      Note

      You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage maps section of the UI.

      • Source storage: Select from the list.
      • Target storage: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Storage map name: Enter a name or let MTV automatically generate a name for the storage map.
  10. Click Next.
  11. On the Migration type page, choose one of the following:

    • Cold migration (default)
    • Warm migration
  12. Click Next.
  13. On the Other settings (optional) page, specify any of the following settings that are appropriate for your plan. All are optional.

    • Disk decryption passphrases: For disks encrypted using Linux Unified Key Setup (LUKS).

      • Enter a decryption passphrase for a LUKS-encrypted device.
      • To add another passphrase, click Add passphrase and add a passphrase.
      • Repeat as needed.

        You do not need to enter the passphrases in a specific order. For each LUKS-encrypted device, MTV tries each passphrase until one unlocks the device.

    • Transfer Network: The network used to transfer the VMs to OpenShift Virtualization. This is the default transfer network of the provider.

      • Verify that the transfer network is in the selected target project.
      • To choose a different transfer network, select a different transfer network from the list.
      • Optional: To configure another OpenShift network in the OpenShift web console, click Networking > NetworkAttachmentDefinitions.

        To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.

      • To adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
    • Preserve static IPs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP during migration.

      • To preserve static IPs, select the Preserve the static IPs checkbox.

        MTV then issues a warning message about any VMs whose vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere. This causes the vNIC properties to be reported to MTV.

    • Root device: Applies to multi-boot VM migrations only. By default, MTV uses the first bootable device detected as the root device.

      • To specify a different root device, enter it in the text box.

        MTV uses the following format for disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format would be: /dev/sdb2. After you enter the boot device, click Save.

        If the conversion fails because the boot device provided is incorrect, it is possible to get the correct information by checking the conversion pod logs.

    • Shared disks: Applies to cold migrations only. Shared disks are disks that are attached to multiple VMs and that use the multi-writer option. These characteristics make shared disks difficult to migrate. By default, MTV migrates shared disks.

      Note

      Migrating shared disks might slow down the migration process.

      • To migrate shared disks in the migration plan, verify that the Shared disks checkbox is selected.
      • To avoid migrating shared disks, clear the Shared disks checkbox.
  14. Click Next.
  15. On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.
  16. To add a hook, select the appropriate Enable hook checkbox.
  17. Enter the Hook runner image.
  18. Enter the Ansible playbook of the hook in the window.

    Note

    You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.
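    Behind the wizard, each hook is a Hook CR that the migration plan references. The following is a minimal sketch, assuming the default hook-runner image and a base64-encoded playbook (verify both against the Hook CR reference for your MTV version):

      apiVersion: forklift.konveyor.io/v1beta1
      kind: Hook
      metadata:
        name: <hook_name>
        namespace: <namespace>
      spec:
        image: quay.io/konveyor/hook-runner  # image that runs the playbook
        playbook: <base64_encoded_ansible_playbook>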

  19. Click Next.
  20. On the Review and create page, review the information displayed.
  21. Edit any item by doing the following:

    1. Click its Edit step link.

      The wizard opens to the page where you defined the item.

    2. Edit the item.
    3. Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.
  22. When you finish reviewing the details of the plan, click Create plan.

    MTV validates your plan. If validation succeeds, the Plan details page for your plan opens. This page contains important settings that do not appear in the wizard.

Next steps

10.11. Configuring VMware migration plan settings

After you create a migration plan by using the Migration Toolkit for Virtualization wizard, the Plan details page opens. This page contains important settings that do not appear in the wizard but can affect your migration. You can configure these settings immediately after creating the plan or return to configure them later before running the plan.

Prerequisites

Procedure

  1. On the Plan details page for your plan, review the Plan settings section.

    The Plan settings section includes settings that you specified in the Other settings (optional) page of the wizard and some additional optional settings. The steps below refer to the additional optional settings, but you can edit any of the settings by clicking the Options menu (kebab icon), making the change, and then clicking Save.

  2. Check the following items in the Plan settings section of the page:

    1. Volume name template: Specifies a template for the volume interface name for the VMs in your plan.

      For a list of available variables and examples, see Volume name template variables.

      • To specify a volume name template for all the VMs in your plan, do the following:

        • Click the Edit icon.
        • Click Enter custom naming template.
        • Enter the template according to the instructions. Be sure that your template generates volume names that are DNS-compliant.
        • Click Save.
      • To specify a volume name template only for specific VMs, do the following:

        • Click the Virtual Machines tab.
        • Select the desired VMs.
        • Click the Options menu (kebab icon) of the VM.
        • Select Edit Volume name template.
        • Enter the template according to the instructions. Be sure that your template generates volume names that are DNS-compliant.
        • Click Save.
    2. PVC name template: Specifies a template for the name of the persistent volume claim (PVC) for the VMs in your plan.

      For a list of available variables, template functions, and examples, see PVC name template variables.

      • To specify a PVC name template for all the VMs in your plan, do the following:

        • Click the Edit icon.
        • Click Enter custom naming template.
        • Enter the template according to the instructions. Be sure that your template generates PVC names that are DNS-compliant.
        • Click Save.
      • To specify a PVC name template only for specific VMs, do the following:

        • Click the Virtual Machines tab.
        • Select the desired VMs.
        • Click the Options menu (kebab icon) of the VM.
        • Select Edit PVC name template.
        • Enter the template according to the instructions. Be sure that your template generates PVC names that are DNS-compliant.
        • Click Save.
    3. Network name template: Specifies a template for the network interface name for the VMs in your plan.

      For a list of available variables and examples, see Network name template variables.

      • To specify a network name template for all the VMs in your plan, do the following:

        • Click the Edit icon.
        • Click Enter custom naming template.
        • Enter the template according to the instructions. Be sure that your template generates network interface names that are DNS-compliant.
        • Click Save.
      • To specify a network name template only for specific VMs, do the following:

        • Click the Virtual Machines tab.
        • Select the desired VMs.
        • Click the Options menu (kebab icon) of the VM.
        • Select Edit Network name template.
        • Enter the template according to the instructions. Be sure that your template generates network interface names that are DNS-compliant.
        • Click Save.

          Important

          Changes you make on the Virtual Machines tab override any changes on the Plan details page.

          Important

          You must ensure that the templates you enter on the Plan details page generate DNS-compliant names. If the template syntax is invalid or if the resulting names are not DNS-compliant, MTV adds an error to the plan conditions and prevents the migration from running. To help ensure valid names, you might want to run a Go script that uses the sprig methods that MTV supports. For tables documenting the methods that MTV supports, see MTV template utility for VMware.

    4. Raw copy mode: By default, during migration, virtual machines (VMs) are converted by a tool named virt-v2v that makes them compatible with OpenShift Virtualization. For more information about the virt-v2v conversion process, see How MTV uses the virt-v2v tool. Raw copy mode copies VMs without converting them. This allows for faster migrations, supports VMs running a wider range of operating systems, and supports migrating disks encrypted using Linux Unified Key Setup (LUKS) without needing keys. However, VMs migrated using raw copy mode might not function properly on OpenShift Virtualization.

      • To use raw copy mode for your migration plan, do the following:

        • Click the Edit icon.
        • Toggle the Raw copy mode switch to enable it.
        • Optional: Configure the Use compatibility mode setting:

          When you enable Use compatibility mode (default), MTV uses compatibility devices (SATA bus, E1000E NICs, USB) to ensure the VM can boot on OpenShift Virtualization.

          When you disable Use compatibility mode, MTV uses pre-installed VirtIO devices on the source VM for better performance.

          Important

          Only disable Use compatibility mode if VirtIO drivers are already installed in the source VM. VMs without pre-installed VirtIO drivers do not boot on OpenShift Virtualization if you disable compatibility mode.

        • Click Save.
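      In the Plan CR, raw copy mode and compatibility mode correspond to two booleans. A minimal sketch, assuming the skipGuestConversion and useCompatibilityMode fields (verify the field names against the Plan CR reference for your MTV version):

        spec:
          skipGuestConversion: true   # raw copy mode: copy disks without virt-v2v conversion
          useCompatibilityMode: true  # boot with compatibility devices (SATA bus, E1000E NICs, USB)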
    5. VM target node selector, VM target labels, and VM target affinity rules are options that support VM target scheduling, a feature that lets you direct MTV to migrate virtual machines (VMs) to specific nodes or workloads (pods) of OpenShift Virtualization as well as to schedule when to power on the VMs.

      For more information on the feature in general, see Target VM scheduling options.

      For more details on using the feature with the UI, see Scheduling target VMs from the user interface.

      • VM target node selector allows you to create mandatory exact match key-value label pairs that the target node must possess. If no node on the cluster has all the labels specified, the VM is not scheduled and it remains in a Pending state until there is space on a node that fits these key-value label pairs.

        • To use the VM target node selector for your migration plan, do the following:

          • Click the Edit icon.
          • Enter a key-value label pair. For example, to require that all VMs in the plan be migrated to your east data center, enter dataCenter as your key and east as your value.
          • To add another key-value label pair, click + and enter another key-value pair.
          • Click Save.
      • VM target labels allow you to apply organizational or operational labels to migrated VMs for identification and management. For example, you can add a target VM label that specifies a different scheduler for your migrated VMs.

        • To use VM target labels for your migration plan, do the following:

          • Click the Edit icon.
          • Enter one or more VM target labels.
          • Click Save.
      • VM target affinity rules: Target affinity rules let you use conditions to either require or prefer scheduling on specific nodes or workloads (pods).

        Target anti-affinity rules let you prevent VMs from being scheduled to run on selected workloads (pods), or prefer that they are not scheduled there. These kinds of rules offer more flexible placement control than rigid node selector rules because they support operators such as In and NotIn.

        For example, you could require that a VM be powered on "only if it is migrated to node A or if it is migrated to an SSD disk, but it cannot be migrated to a node for which license-tier=silver is true."

        Additionally, both target affinity and target anti-affinity rules allow you to include both hard and soft conditions in the same rule. A hard condition is a requirement, and a soft condition is a preference. The previous example used only hard conditions. A rule that states that "a VM can be powered on if it is migrated to node A or if it is migrated to an SSD disk, but it is preferred not to migrate it to a node for which license-tier=silver is true" is an example of a rule that uses soft conditions.

        MTV supports target affinity rules at both the node level and the workload (pod) level. It supports anti-affinity rules at the workload (pod) level only.

        • To use VM target affinity rules for your migration plan, do the following:

          • Click the Edit icon.
          • Click Add affinity rule.
          • Select the Type of affinity rule from the list. Valid options: Node Affinity, Workload (pod) Affinity, Workload (pod) Anti-Affinity.
          • Select the Condition from the list. Valid options: Preferred during scheduling (soft condition), Required during scheduling (hard condition).
          • Soft condition only: Enter a numerical Weight. The higher the weight, the stronger the preference. Valid options: whole numbers from 1-100.
          • Enter a Topology key, the key for the node label that the system uses to denote the domain.
          • Optional: Select the Workload labels that you want to set by doing the following:

            • Enter a Key.
            • Select an Operator from the list. Valid options: Exists, DoesNotExist, In, and NotIn.
            • Enter a Value.
          • To add another label, click Add expression and add another key-value pair with an operator.
          • Click Save affinity rule.
          • To add another affinity rule, click Add affinity rule. Rules with a preferred condition will stack with an AND relation between them. Rules with a required condition will stack with an OR relation between them.

            MTV validates any changes you made on this page.
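      Taken together, the three scheduling options correspond to fields of the Plan CR. A minimal sketch, assuming the targetNodeSelector, targetLabels, and targetAffinity fields and standard Kubernetes affinity syntax (verify the field names against the Plan CR reference for your MTV version):

        spec:
          targetNodeSelector:
            dataCenter: east             # hard requirement: nodes labeled dataCenter=east
          targetLabels:
            scheduler: custom-scheduler  # label applied to the migrated VMs
          targetAffinity:
            nodeAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 80             # soft condition: prefer to avoid license-tier=silver nodes
                  preference:
                    matchExpressions:
                      - key: license-tier
                        operator: NotIn
                        values:
                          - silver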

  3. In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:

    • Migration history: Details about successful and unsuccessful attempts to run the plan
    • Conditions: Any changes that need to be made to the plan so that it can run successfully
  4. When you have fixed all conditions listed, you can run your plan from the Plans page.

    The Plan details page also includes five additional tabs, which are described in the table that follows:

    Table 10.11. Tabs of the Plan details page

      • YAML: Editable YAML Plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs
      • Virtual Machines: The VMs the plan migrates
      • Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs
      • Mappings: Editable specification of the network and storage maps used by your plan
      • Hooks: Updatable specification of the hooks used by your plan, if any

10.11.1. PVC name template variables

When creating a migration plan, you can use template variables to customize the names of persistent volume claims (PVCs) for migrated virtual machine disks. The template follows Go template syntax.

You must ensure that PVC names are DNS-compliant: they must consist of lowercase alphanumeric characters or hyphens (-), start with an alphanumeric character, and not exceed 63 characters. If the template syntax is invalid or if the resulting PVC names are not DNS-compliant, MTV adds an error to the plan conditions and prevents the migration from running.

10.11.1.1. Available variables

The pvcNameTemplate field has access to the following variables:

  • .VmName: Name of the VM in the source cluster.
  • .TargetVmName: Final VM name in the target cluster. May equal .VmName if no rename or normalization occurs.
  • .PlanName: Name of the migration plan.
  • .DiskIndex: Sequential index of the disk (0-based).
  • .RootDiskIndex: Index value that identifies the root disk.
  • .Shared: Indicates whether the volume is shared. Options: true for a shared volume, false for a non-shared volume.
  • .WinDriveLetter: Windows drive letter in lowercase (for example, c, d). Only applicable to Windows guests.

    This variable requires the QEMU Guest Agent (or VMware Tools) to be active and reachable on the source VM. It also requires a disk key mapping between the guest operating system and the hardware disk. This mapping may be missing for some disk configurations, particularly IDE disks. When the mapping is missing (for example, on Linux VMs or when the Windows drive letter is not explicitly set), the variable resolves to an empty string. If this results in an invalid PVC name, MTV adds an error to the plan conditions and prevents the migration from running.

  • .FileName: Name of the virtual disk file (for example, vm-disk1.vmdk), not the full datastore path. VMware only.

10.11.1.2. Template functions

You can use Go template functions to transform variable values into valid PVC names. Common functions include:

  • trimSuffix: Removes a suffix from a string. For example, {{.FileName | trimSuffix ".vmdk"}} removes the .vmdk extension.
  • replace: Replaces all occurrences of a substring. For example, {{.FileName | replace "_" "-"}} replaces underscores with hyphens.
  • lower: Converts a string to lowercase. For example, {{.FileName | lower}}.

Functions can be chained together using the pipe (|) operator. Regular expression functions such as mustRegexReplaceAll are also available for advanced transformations.

10.11.1.3. Examples

  • "{{.VmName}}-disk-{{.DiskIndex}}" – Basic template with VM name and disk index
  • "{{if eq .DiskIndex .RootDiskIndex}}root{{else}}data{{end}}-{{.DiskIndex}}" – Conditional naming for root disk
  • "{{if .Shared}}shared-{{end}}{{.VmName}}-{{.DiskIndex}}" – Mark shared volumes
  • "pvc-{{.VmName}}-drive-{{.WinDriveLetter}}" – Include Windows drive letter (results in pvc-myvm-drive-c)
  • "{{.FileName | trimSuffix ".vmdk" | replace "_" "-" | replace "." "-" | lower}}-disk-{{.DiskIndex}}" – Transform filename to valid PVC name (transforms VM_Disk.1.vmdk to vm-disk-1-disk-0)

10.11.2. Network name template variables

When creating a migration plan, you can use template variables to customize the names of network interfaces for migrated virtual machines. The template follows Go template syntax.

You must ensure that network interface names are DNS-compliant: they must consist of lowercase alphanumeric characters or hyphens (-), start with an alphanumeric character, and not exceed 63 characters. If the template syntax is invalid or if the resulting network interface names are not DNS-compliant, MTV adds an error to the plan conditions and prevents the migration from running.

10.11.2.1. Available variables

The networkNameTemplate field has access to the following variables:

  • .NetworkName: Name of the Multus Network Attachment Definition if the target network is multus. Empty otherwise.
  • .NetworkNamespace: Namespace where the Multus Network Attachment Definition is located if the target network is multus. Empty otherwise.
  • .NetworkType: Specifies the network type. Options: multus or pod.
  • .NetworkIndex: Sequential index of the network interface (0-based).

10.11.2.2. Examples

  • "net-{{.NetworkIndex}}"
  • "{{if eq .NetworkType \"pod\"}}pod{{else}}multus-{{.NetworkIndex}}{{end}}"

10.11.3. Volume name template variables

When creating a migration plan, you can use template variables to customize the names of volume interfaces for migrated virtual machines. The template follows Go template syntax.

You must ensure that volume names are DNS-compliant: they must consist of lowercase alphanumeric characters or hyphens (-), start with an alphanumeric character, and not exceed 63 characters. If the template syntax is invalid or if the resulting volume names are not DNS-compliant, MTV adds an error to the plan conditions and prevents the migration from running.

10.11.3.1. Available variables

The volumeNameTemplate field has access to the following variables:

  • .PVCName: Name of the PVC mounted to the VM using this volume.
  • .VolumeIndex: Sequential index of the volume interface (0-based).

10.11.3.2. Examples

  • "disk-{{.VolumeIndex}}"
  • "pvc-{{.PVCName}}"

10.12. Migration of LUKS-encrypted disks

If you have virtual machines (VMs) with LUKS-encrypted disks in your source VMware vSphere environment, you can migrate them to Red Hat OpenShift Virtualization by enabling Network-Bound Disk Encryption (NBDE) with Clevis. Alternatively, you can manually add passphrases for LUKS-encrypted devices in your migration plan.

If you enable NBDE, passphrases are retrieved from the Tang server. If you manually add LUKS passphrases to your migration plan, you provide the list of passphrases yourself, and you cannot use NBDE to retrieve them. The two methods are incompatible: you must use either NBDE or manual LUKS passphrases to migrate LUKS-encrypted disks.

MTV transfers the data of VMs with LUKS-encrypted disks from the source environment to the target OpenShift Virtualization cluster. MTV reads only the blocks that are required for guest conversion, decrypting the required blocks and re-encrypting them locally with the same key.

Clevis is a client-side framework that automates the decryption of LUKS volumes by binding a LUKS key slot to a policy. During the migration of VMs with LUKS-encrypted disks, MTV authenticates with the configured network service by requesting the key to unlock the LUKS-encrypted disk from the Tang server. The automatic retrieval of the key allows the VM to boot without a manual passphrase entry from an administrator.

Benefits of NBDE
  • Automation: Eliminates the need to enter keys manually during migration.
  • Enhanced security: Maintains the security of VMs throughout their migration lifecycle by preserving LUKS encryption from the source to the destination.
  • Seamless operation: Ensures that VMs with encrypted disks can be brought online in the new OpenShift Virtualization environment with minimal interruption.

When you enable Network-Bound Disk Encryption (NBDE) with Clevis, the Tang server manages the keys for Linux Unified Key Setup (LUKS)-encrypted disks during a migration. If you do not use NBDE to migrate LUKS-encrypted disks from your source environment, you can manually add passwords for LUKS-encrypted devices instead. You must use either NBDE or manual LUKS passwords to migrate LUKS-encrypted disks.

You can enable NBDE with Clevis either in the MTV UI or in the YAML file for your migration plan:

  • In the MTV UI, you must select either NBDE with Clevis or LUKS passphrases. You can have only one encryption type, and you apply the setting to all VMs in your migration plan.
  • In the YAML file for your migration plan, you can combine encryption types and apply the setting to selected VMs in the YAML file.

Prerequisites

  • The Tang server is accessible from your OpenShift cluster and from the migration network.
  • You have a LUKS key slot bound to the Tang server policy.

    Note

    For MTV to access the keys from the Tang server, the keys must be on a different subnet range than a user-defined network (UDN).

Procedure

  1. Enable NBDE with Clevis in the MTV UI.

    1. In the Create migration plan wizard, navigate to Other settings under Additional setup in the left navigation pane.
    2. Select Use NBDE/Clevis.

      If you are not using NBDE with Clevis, add passphrases for the LUKS-encrypted devices instead, so that MTV can decrypt the disks during the migration.

    3. Click Next, and verify that Use NBDE/Clevis shows as Enabled under Other settings (optional).
    4. After you create your migration plan, click Migration plans in the left navigation menu, and open the Plan details page for your migration plan.
    5. Click the Edit icon for Disk decryption under Plan settings.
    6. Verify that Use network-bound disk encryption (NBDE/Clevis) is selected.

      If you are not using NBDE with Clevis, verify that the passphrases for LUKS-encrypted devices are added.

  2. Enable NBDE with Clevis in the YAML file.

    1. Click Migration plans in the left navigation menu and open the Plan details page for your migration plan.
    2. Click the YAML tab to open the Plan custom resource (CR) for your migration plan.
    3. For each VM under vms in the YAML file, enter the encryption type. In this example, you set nbdeClevis as the encryption type for vm-1, LUKS passphrase as the encryption type for vm-2, and no encryption type for vm-3:

      Example:

      vms:
        - id: vm-1
          name: vm-1-esx8.0-rhel8.10-raid1
          targetPowerState: 'on'
          nbdeClevis: true                  # encryption type: NBDE with Clevis
        - id: vm-2
          name: vm-2-esx8.0-rhel8.10-raid1
          luks: { name: 'test-secret-1' }   # encryption type: LUKS passphrases stored in a Secret
        - id: vm-3
          name: vm-3-esx8.0-rhel8.10-raid1  # no encryption type
Troubleshooting

10.14. About scheduling importer pods

Migration Toolkit for Virtualization uses virt-v2v converter pods, also called importer pods, to transfer data from VMware source virtual machines (VMs) to target VMs.

By default, OpenShift Virtualization assigns the nodes to which these importer pods transfer data. However, for cold migrations from VMware VMs, you can schedule the destination nodes for the importer pods.

Important

Scheduling importer pods is supported only for cold migrations from VMware VMs. It is not supported for warm migrations from VMware or for migrations from other vendors.

In MTV 2.11, scheduling importer pods is available only for migrations from the command-line interface (CLI). You schedule the importer pods in the Plan CR, as described in step 8 of Running a VMware vSphere migration from the command line.
