
Installing and using the Migration Toolkit for Virtualization


Migration Toolkit for Virtualization 2.0

Migrating from VMware to Red Hat OpenShift Virtualization

Red Hat Modernization and Migration Documentation Team

Abstract

The Migration Toolkit for Virtualization (MTV) enables you to migrate virtual machines from VMware vSphere to OpenShift Virtualization running on OpenShift Container Platform.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. About the Migration Toolkit for Virtualization

The Migration Toolkit for Virtualization (MTV) enables you to migrate virtual machines from VMware vSphere to OpenShift Virtualization, an add-on to OpenShift Container Platform 4.7. With OpenShift Virtualization, you can run and manage virtual machine workloads alongside container workloads.

1.1. MTV custom resources and services

The Migration Toolkit for Virtualization (MTV) is provided as an OpenShift Container Platform Operator. It creates and manages the following custom resources (CRs) and services.

MTV custom resources

  • Provider CR stores attributes that enable MTV to connect to and interact with the source and target providers.
  • NetworkMapping CR maps the networks of the source and target providers.
  • StorageMapping CR maps the storage of the source and target providers.
  • Provisioner CR stores the configuration of the storage provisioners, such as supported volume and access modes.
  • Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.
  • Migration CR runs a migration plan.

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.
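
    For illustration only, two Migration CRs that reference the same Plan CR might look like the following sketch. The names are placeholders, and the full procedure for creating these CRs is described in the command-line migration chapter.

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <plan>-run-1        # earlier run that completed or was canceled
      namespace: openshift-mtv
    spec:
      plan:
        name: <plan>            # both Migration CRs reference the same Plan CR
        namespace: openshift-mtv
    ---
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <plan>-run-2        # later run that migrates the remaining VMs
      namespace: openshift-mtv
    spec:
      plan:
        name: <plan>
        namespace: openshift-mtv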

MTV services

  • Provider Inventory service:

    • Connects to the source and target providers.
    • Maintains a local inventory for mappings and plans.
    • Stores VM configurations.
    • Runs the Validation service if a VM configuration change is detected.
  • Validation service checks the suitability of a VM for migration by applying rules.
Important

The Validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

  • User Interface service:

    • Enables you to create and configure MTV CRs.
    • Displays the status of the CRs and the progress of a migration.
  • Migration Controller service orchestrates migrations.

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller changes the plan status to Completed.

  • Virtual Machine Import Controller, Kubevirt Controller, and Containerized Data Import (CDI) Controller services handle most technical operations.

1.2. High-level migration workflow

The high-level workflow shows the migration process from the point of view of the user.

Figure 1.1. High-level workflow

The workflow describes the following steps:

  1. You create a source provider, a target provider, a network mapping, and a storage mapping.
  2. You create a migration plan that includes the following resources:

    • Source provider
    • Target provider
    • Network mapping
    • Storage mapping
    • One or more VMs
  3. You run a migration plan by creating a Migration CR that references the migration plan. If a migration is incomplete, you can run a migration plan multiple times until all VMs are migrated.
  4. For each VM in the migration plan, the Migration Controller creates a VirtualMachineImport CR and monitors its status. When all VMs have been migrated, the Migration Controller sets the status of the migration plan to Completed. The power state of a source VM is maintained after migration.
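
You can follow this per-VM progress from the command line by watching the VirtualMachineImport CRs, as in the following sketch. The plural resource name and the namespace are assumptions for illustration.

    $ oc get virtualmachineimports -n openshift-mtv -w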

1.3. Detailed migration workflow

You can use the detailed migration workflow to troubleshoot a failed migration.

Figure 1.2. Detailed OpenShift Virtualization migration workflow

The workflow describes the following steps:

  1. When you run a migration plan, the Migration Controller creates a VirtualMachineImport custom resource (CR) for each source VM.
  2. The Virtual Machine Import Controller validates the VirtualMachineImport CR and generates a VirtualMachine CR.
  3. The Virtual Machine Import Controller retrieves the VM configuration, including network, storage, and metadata, linked in the VirtualMachineImport CR.



    For each VM disk:

  4. The Virtual Machine Import Controller creates a DataVolume CR as a wrapper for a Persistent Volume Claim (PVC) and annotations. A simplified sketch of this relationship follows the list.


  5. The Containerized Data Importer (CDI) Controller creates a PVC. The Persistent Volume (PV) is dynamically provisioned by the StorageClass provisioner.


  6. The CDI Controller creates an Importer pod.
  7. The Importer pod connects to the VM disk by using the VMware Virtual Disk Development Kit (VDDK) SDK and streams the VM disk to the PV.

    After the VM disks are transferred:

  8. The Virtual Machine Import Controller creates a Conversion pod with the PVCs attached to it.

    The Conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

  9. The Virtual Machine Import Controller creates a VirtualMachineInstance CR.
  10. When the target VM is powered on, the KubeVirt Controller creates a VM pod.

    The VM pod runs QEMU-KVM with the PVCs attached as VM disks.
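
The following sketch illustrates the DataVolume-to-PVC relationship created in step 4. It is illustrative only: the names are placeholders, and the source stanza, which the controller populates with the VMware disk details, is omitted.

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <vm_name>-disk-0              # hypothetical name for one VM disk
      namespace: <target_namespace>
    spec:
      source: {}                          # filled in by the controller; omitted here
      pvc:                                # the PVC that the CDI Controller creates
        accessModes:
          - ReadWriteOnce
        volumeMode: Block
        resources:
          requests:
            storage: <disk_size>
        storageClassName: <storage_class>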

1.4. Storage support and default modes

The Migration Toolkit for Virtualization (MTV) supports OpenShift Virtualization storage features.

Note

If the OpenShift Virtualization storage does not support dynamic provisioning, MTV applies the default settings, Filesystem volume mode and ReadWriteOnce access mode. Filesystem volume mode is slower than Block volume mode. ReadWriteOnce access mode does not enable live virtual machine migration.

MTV uses the following default volume and access modes.

Table 1.1. Default volume and access modes

Provisioner                               Volume mode   Access mode
kubernetes.io/aws-ebs                     Block         ReadWriteOnce
kubernetes.io/azure-disk                  Block         ReadWriteOnce
kubernetes.io/azure-file                  Filesystem    ReadWriteMany
kubernetes.io/cinder                      Block         ReadWriteOnce
kubernetes.io/gce-pd                      Block         ReadWriteOnce
kubernetes.io/hostpath-provisioner        Filesystem    ReadWriteOnce
manila.csi.openstack.org                  Filesystem    ReadWriteMany
openshift-storage.cephfs.csi.ceph.com     Filesystem    ReadWriteMany
openshift-storage.rbd.csi.ceph.com        Block         ReadWriteOnce
kubernetes.io/rbd                         Block         ReadWriteOnce
kubernetes.io/vsphere-volume              Block         ReadWriteOnce
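
To check which provisioner a storage class uses, and therefore which defaults apply, list the storage classes on the target cluster. The PROVISIONER column of the output corresponds to the first column of the table above.

    $ oc get storageclass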

1.5. Warm migration

The default migration type for the Migration Toolkit for Virtualization (MTV) is cold migration. During cold migration, the virtual machines (VMs) are shut down while the data is copied.

Warm migration copies most of the data during the precopy stage. Then the VMs are shut down and the remaining data is copied during the cutover stage.

Precopy stage

The VMs are not shut down during the precopy stage.

The VM disks are copied incrementally using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by patching the vm-import-controller-config config map.
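
For example, to set the snapshot interval to 120 minutes, you can patch the config map as follows. This is a sketch; the same command, with the default namespace used in this guide, appears in the command-line migration procedure and in the known issues section.

    $ oc patch configmap/vm-import-controller-config \
      -n openshift-cnv \
      -p '{"data": {"warmImport.intervalMinutes": "120"}}'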

Important

You must enable CBT on the source VMs and the VM disks.

A VM can support up to 28 CBT snapshots. If that limit is exceeded, a warm import retry limit reached error message is displayed. If the VM has preexisting CBT snapshots, it will reach this limit sooner.

The precopy stage runs until either the cutover stage starts or the maximum number of CBT snapshots is reached.

Cutover stage

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

You can start the cutover stage manually in the MTV console.

You can schedule a cutover time from the CLI by specifying the value of the cutover parameter in the Migration CR manifest.
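
For example, a Migration CR that schedules a cutover might look like the following sketch. The names are placeholders; the full Migration CR is described in the command-line migration chapter.

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: openshift-mtv
    spec:
      plan:
        name: <plan>
        namespace: openshift-mtv
      cutover: "2021-04-04T01:23:45.678+09:00"   # ISO 8601 format with the UTC time offset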

Chapter 2. Installing the Migration Toolkit for Virtualization

You can install the Migration Toolkit for Virtualization (MTV) by using the OpenShift Container Platform web console or the command line interface (CLI).

Important

After you have installed MTV, you must create a VMware Virtual Disk Development Kit (VDDK) image and add it to the spec.vddkInitImage field of the HyperConverged custom resource (CR).

2.1. Installing the MTV Operator

You can install the MTV Operator by using the OpenShift Container Platform web console or the command line interface (CLI).

2.1.1. Installing the MTV Operator by using the web console

You can install the MTV Operator by using the OpenShift Container Platform web console.

Prerequisites

  • OpenShift Container Platform 4.7 installed.
  • OpenShift Virtualization Operator installed.
  • You must be logged in as a user with cluster-admin permissions.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field to search for mtv-operator.
  3. Click the MTV Operator and then click Install.
  4. On the Install Operator page, click Install.
  5. Click Operators → Installed Operators to verify that the MTV Operator appears in the openshift-mtv project with the status Succeeded.
  6. Click the MTV Operator.
  7. Under Provided APIs, locate the ForkliftController, and click Create Instance.
  8. Click Create.
  9. Click Workloads → Pods to verify that the MTV pods are running.

Obtaining the MTV web console URL

You can obtain the MTV web console URL by using the OpenShift Container Platform web console.

Prerequisites

  • OpenShift Virtualization Operator installed.
  • MTV Operator installed.
  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Networking → Routes.
  3. Select the openshift-mtv project in the Project: list.
  4. Click the URL for the forklift-ui service to open the login page for the MTV web console.

2.1.2. Installing the MTV Operator from the command line interface

You can install the MTV Operator from the command line interface (CLI).

Prerequisites

  • OpenShift Container Platform 4.7 installed.
  • OpenShift Virtualization Operator installed.
  • You must be logged in as a user with cluster-admin permissions.

Procedure

  1. Create the openshift-mtv project:

    $ cat << EOF | oc apply -f -
    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: openshift-mtv
    EOF
  2. Create an OperatorGroup CR called migration:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: migration
      namespace: openshift-mtv
    spec:
      targetNamespaces:
        - openshift-mtv
    EOF
  3. Create a Subscription CR for the Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: mtv-operator
      namespace: openshift-mtv
    spec:
      channel: release-v2.0.0
      installPlanApproval: Automatic
      name: mtv-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      startingCSV: "mtv-operator.v2.0.0"
    EOF
  4. Create a ForkliftController CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: ForkliftController
    metadata:
      name: forklift-controller
      namespace: openshift-mtv
    spec:
      olm_managed: true
    EOF
  5. Verify that the MTV pods are running:

    $ oc get pods -n openshift-mtv

    Example output

    NAME                                  READY  STATUS   RESTARTS  AGE
    forklift-controller-788bdb4c69-mw268  2/2    Running  0         2m
    forklift-operator-6bf45b8d8-qps9v     1/1    Running  0         5m
    forklift-ui-7cdf96d8f6-xnw5n          1/1    Running  0         2m

Obtaining the MTV web console URL

You can obtain the MTV web console URL from the command line.

Prerequisites

  • OpenShift Virtualization Operator installed.
  • MTV Operator installed.
  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Obtain the MTV web console URL:

    $ oc get route virt -n openshift-mtv \
      -o custom-columns=:.spec.host

    Example output

    https://virt-openshift-mtv.apps.cluster.openshift.com

  2. Launch a browser and navigate to the MTV web console.

2.2. Creating and using a VDDK image

The Migration Toolkit for Virtualization (MTV) uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks from VMware vSphere.

You can download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You then add the VDDK image to the spec.vddkInitImage field of the HyperConverged custom resource (CR).

Note

Storing the VDDK image in a public registry might violate the VMware license terms.

Prerequisites

Procedure

  1. Create and navigate to a temporary directory:

    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
  2. In a browser, navigate to the VMware VDDK download page.
  3. Select the latest VDDK version and click Download.
  4. Save the VDDK archive file in the temporary directory.
  5. Extract the VDDK archive:

    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
  6. Create a Dockerfile:

    $ cat > Dockerfile <<EOF
    FROM registry.access.redhat.com/ubi8/ubi-minimal
    COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    RUN mkdir -p /opt
    ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    EOF
  7. Build the VDDK image:

    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
  8. Push the VDDK image to the registry:

    $ podman push <registry_route_or_server_path>/vddk:<tag>
  9. Ensure that the image is accessible to your OpenShift Virtualization environment.
  10. Edit the HyperConverged CR in the openshift-cnv project:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  11. Add the vddkInitImage parameter to the spec stanza:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
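
    To confirm that the value was saved, you can read it back from the HyperConverged CR. This is a sketch using a JSONPath query; adjust it to your environment if needed.

    $ oc get hco kubevirt-hyperconverged -n openshift-cnv \
      -o jsonpath='{.spec.vddkInitImage}'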

Chapter 3. Migrating virtual machines to OpenShift Virtualization

You can migrate virtual machines (VMs) to OpenShift Virtualization by using the MTV web console or the command line interface (CLI).

You can run a cold or a warm migration. For details, see Warm migration.

3.1. Migration environment requirements

Check your migration environment to ensure that the following requirements are met.

VMware environment requirements

  • VMware vSphere must be version 6.5 or later.
  • If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.
  • Virtual machines:

    • VMware Tools is installed.
    • ISO/CDROM disks are unmounted.
    • Each NIC must contain no more than one IPv4 and/or one IPv6 address.
    • VM name contains only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.
    • VM name does not duplicate the name of a VM in the OpenShift Virtualization environment.
    • Operating system is certified and supported for use as a guest operating system with OpenShift Virtualization and for conversion to KVM with virt-v2v.

Network requirements

  • IP addresses, VLANs, and other network configuration settings must not be changed before or after migration. The MAC addresses of the virtual machines are preserved during migration.
  • Uninterrupted and reliable network connections between the clusters and the replication repository.
  • Network ports enabled in the firewall rules.
Table 3.1. Network ports required for migration

Port   Protocol   Source            Destination         Purpose
443    TCP        OpenShift nodes   VMware vCenter      VMware provider inventory; disk transfer authentication
443    TCP        OpenShift nodes   VMware ESXi hosts   Disk transfer authentication
902    TCP        OpenShift nodes   VMware ESXi hosts   Disk transfer data copy

3.1.1. Increasing the NFC service memory of an ESXi host

If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host to enable additional connections for migrations. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

Procedure

  1. Log in to the ESXi host as root.
  2. Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    ...
          <nfcsvc>
             <path>libnfcsvc.so</path>
             <enabled>true</enabled>
             <maxMemory>1000000000</maxMemory>
             <maxStreamMemory>10485760</maxStreamMemory>
          </nfcsvc>
    ...
  3. Restart hostd:

    # /etc/init.d/hostd restart

    You do not need to reboot the host.
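
    To confirm the change, you can print the nfcsvc section of the configuration file. This is a sketch that assumes grep on the host supports context lines.

    # grep -A 4 '<nfcsvc>' /etc/vmware/hostd/config.xml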

3.2. Migrating virtual machines by using the MTV web console

You can migrate virtual machines to OpenShift Virtualization by using the MTV web console.

3.2.1. Adding providers

You can add VMware and OpenShift Virtualization providers by using the MTV web console.

3.2.1.1. Adding a VMware source provider

You can add a VMware source provider by using the MTV web console.

Procedure

  1. In the MTV web console, click Providers.
  2. Click Add provider.
  3. Select VMware from the Type list.
  4. Fill in the following fields:

    • Name: Name to display in the list of providers
    • Hostname or IP address: vCenter host name or IP address
    • Username: vCenter admin user name, for example, administrator@vsphere.local
    • Password: vCenter admin password
    • SHA-1 fingerprint: vCenter SHA-1 fingerprint

      To obtain the vCenter SHA-1 fingerprint, enter the following command:

      $ openssl s_client \
          -connect <vcenter.example.com>:443 < /dev/null 2>/dev/null \
          | openssl x509 -fingerprint -noout -in /dev/stdin \
          | cut -d '=' -f 2

      where <vcenter.example.com> is the vCenter host name.
  5. Click Add to add and save the provider.

    The VMware provider appears in the list of providers.

Selecting a migration network for a VMware provider

You can select a migration network in the MTV web console for a VMware source provider to reduce risk to the VMware environment and to improve performance.

The default migration network is the management network. However, using the management network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the VMware platform because the disk transfer operation might saturate the network and impede communication between vCenter and the ESXi hosts.

Prerequisites

  • The migration network must have sufficient throughput, minimum speed of 10 Gbps, for disk transfer.
  • The migration network must be accessible to the OpenShift Virtualization nodes through the default gateway.

    Note

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

  • The migration network must have jumbo frames enabled.
  • You must have administrator privileges for each ESXi host.

Procedure

  1. In the MTV web console, click Providers.
  2. Click VMware.
  3. Click the host number in the Hosts column beside a VMware provider to view a list of hosts.
  4. Select one or more hosts and click Select migration network.
  5. Complete the following fields:

    • Network: Select the migration network.

      You can clear the migration network selection by selecting the default management network.

    • ESXi host admin username: Specify the ESXi host admin user name, for example, root.
    • ESXi host admin password: Specify the ESXi host password.
  6. Click Save.
  7. Verify that the status of each host is Ready.

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

3.2.1.2. Adding an OpenShift Virtualization provider

You can add an OpenShift Virtualization provider to the MTV web console in addition to the default OpenShift Virtualization provider, which is the provider where you installed MTV.

Prerequisites

Procedure

  1. In the MTV web console, click Providers.
  2. Click Add provider.
  3. Select OpenShift Virtualization from the Type list.
  4. Complete the following fields:

    • Cluster name: Specify the cluster name to display in the list of target providers.
    • URL: Specify the API endpoint of the cluster.
    • Service account token: Specify the cluster-admin service account token.
  5. Click Check connection to verify the credentials.
  6. Click Add.

    The provider appears in the list of providers.

Selecting a migration network for an OpenShift Virtualization provider

You can select a default migration network for an OpenShift Virtualization provider in the MTV web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

Note

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure

  1. In the MTV web console, click Providers.
  2. Click OpenShift Virtualization.
  3. Select a provider and click Select migration network.
  4. Select a network from the list of available networks and click Select.
  5. Click the network number in the Networks column beside the provider to verify that the selected network is the default migration network.

3.2.2. Creating a network mapping

You can create one or more network mappings by using the MTV web console to map source networks to OpenShift Virtualization networks.

Note

You cannot map an opaque network, typically managed by NSX, to an OpenShift Virtualization network.

Prerequisites

  • Source and target providers added to the web console.
  • If you map more than one source and target network, you must create a network attachment definition for each additional target network.

Procedure

  1. Click Mappings → Network.
  2. Click Create mapping.
  3. Complete the following fields:

    • Name: Enter a name to display in the network mappings list.
    • Source provider: Select a source provider.
    • Target provider: Select a target provider.
    • Source networks: Select a source network.
    • Target namespaces/networks: Select a target network.
  4. Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.
  5. If you create an additional network mapping, select the network attachment definition as the target network.
  6. Click Create.

    The network mapping is displayed on the Network mappings screen.

3.2.3. Creating a storage mapping

You can create a storage mapping by using the MTV web console to map source data stores to OpenShift Virtualization storage classes.

Prerequisites

  • Source and target providers added to the web console.
  • Local and shared persistent storage that supports VM migration.

Procedure

  1. Click Mappings → Storage.
  2. Click Create mapping.
  3. Complete the following fields:

    • Name: Enter a name to display in the storage mappings list.
    • Source provider: Select a source provider.
    • Target provider: Select a target provider.
    • Source datastores: Select a source data store.
    • Target storage classes: Select a target storage class.
  4. Optional: Click Add to create additional storage mappings or to map multiple data stores to a single storage class.
  5. Click Create.

    The mapping is displayed on the Storage mappings screen.

3.2.4. Creating a migration plan

You can create a migration plan by using the MTV web console.

A migration plan allows you to group virtual machines that are migrated together or that use the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

Prerequisites

  • You must add the VDDK image to the spec.vddkInitImage field of the HyperConverged custom resource (CR).
  • You must add a source provider to the web console.
  • If the target provider is not the OpenShift Virtualization cluster on which you installed MTV, you must add a target provider.
  • If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
  • If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host.

Procedure

  1. In the web console, click Migration plans and then click Create migration plan.
  2. Complete the following fields:

    • Plan name: Enter a migration plan name to display in the migration plan list.
    • Plan description: Optional. Brief description of the migration plan.
    • Source provider: Select a source provider.
    • Target provider: Select a target provider.
    • Target namespace: You can type to search for an existing target namespace or create a new namespace.
    • You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and clicking Select.

      If you defined a migration transfer network for the OpenShift Virtualization provider and if the network is in the target namespace, that network is the default network for all migration plans. Otherwise, the pod network is used.

  3. Click Next.
  4. Click By clusters and hosts or By folders, select clusters, hosts, or folders to filter the list of VMs, and then click Next.
  5. Select the VMs to migrate and then click Next.
  6. Select an existing network mapping or create a new network mapping.

    To create a new network mapping:

    • Select a target network for each source network.
    • Optional. Select Save mapping to use again and enter a network mapping name.
  7. Click Next.
  8. Select an existing storage mapping or create a new storage mapping.

    To create a new storage mapping:

    • Select a target storage class for each source data store.
    • Optional. Select Save mapping to use again and enter a storage mapping name.
  9. Click Next.
  10. Select Cold migration or Warm migration and click Next.

    • Cold migration: The source VMs are stopped while the data is copied.
    • Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.
  11. Review your migration plan and click Finish.

    The migration plan is saved in the migration plan list.

3.2.5. Running a migration plan

You can run a migration plan and view its progress in the MTV web console.

Prerequisites

  • Valid migration plan.

Procedure

  1. Click Migration plans.

    The Migration plans list displays the source and target providers, the number of VMs being migrated, and the status of the plan.

  2. Click Start beside a migration plan to start the migration.

    If the migration type is Warm, the precopy stage starts.

  3. Click Cutover beside a warm migration plan to complete the migration.
  4. Expand a migration plan to view the migration details.

    The migration details screen displays the migration start and end time, the amount of data copied, and a progress pipeline for each VM being migrated.

  5. Expand a VM to view the migration steps, elapsed time of each step, and the state.

3.2.6. Canceling a migration

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the MTV web console.

Procedure

  1. Click Migration Plans.
  2. Click the name of a running migration plan to view the migration details.
  3. Select one or more VMs and click Cancel.
  4. Click Yes, cancel to confirm the cancellation.

    In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

3.3. Migrating virtual machines from the command line interface

You can migrate virtual machines (VMs) from the command line (CLI) by creating the following custom resources (CRs):

  • Secret contains the VMware provider credentials.
  • Provider contains the VMware provider details.
  • Host contains the VMware host details.
  • NetworkMap maps the source and destination networks.
  • StorageMap maps the source and destination storage.
  • Plan contains a list of VMs to migrate and specifies whether the migration is cold or warm. The Plan references the providers and maps.
  • Migration runs the Plan. If the migration is warm, it specifies the cutover time.

    You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.

The term destination in the API is the same as target in the web console.

Important

You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • The OpenShift CLI must be installed.
  • If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.
  • You must add the VDDK image to the spec.vddkInitImage field of the HyperConverged custom resource (CR).
  • If you are performing a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
  • If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host.

Procedure

  1. Obtain the vCenter SHA-1 fingerprint:

    $ openssl s_client \
        -connect <www.example.com>:443 < /dev/null 2>/dev/null \
        | openssl x509 -fingerprint -noout -in /dev/stdin \
        | cut -d '=' -f 2

    where <www.example.com> is the vCenter host name.

    Example output

    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67

  2. Create a Secret CR manifest for the VMware provider:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <vmware_secret>
      namespace: openshift-mtv
    type: Opaque
    stringData:
      user: <vcenter_user> # (1)
      password: <vcenter_password> # (2)
      thumbprint: <vcenter_fingerprint> # (3)
    EOF

    (1) Specify the vCenter administrator account, for example, administrator@vsphere.local.
    (2) Specify the vCenter password.
    (3) Specify the vCenter SHA-1 fingerprint.
  3. Create a Provider CR manifest for the VMware provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <vmware_provider>
      namespace: openshift-mtv
    spec:
      type: vsphere
      url: <api_end_point> # (1)
      secret:
        name: <vmware_secret> # (2)
        namespace: openshift-mtv
    EOF

    (1) Specify the vSphere API end point, for example, https://<vcenter.host.com>/sdk.
    (2) Specify the name of the VMware Secret CR.
  4. Create a Host CR manifest for the VMware host:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Host
    metadata:
      name: <vmware_host>
      namespace: openshift-mtv
    spec:
      provider:
        namespace: openshift-mtv
        name: <source_provider> # (1)
      id: <source_host_mor> # (2)
      ipAddress: <source_network_ip> # (3)
    EOF

    (1) Specify the name of the VMware Provider CR.
    (2) Specify the managed object reference of the VMware host.
    (3) Specify the IP address of the VMware migration network.
  5. Create a NetworkMap CR manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: openshift-mtv
    spec:
      map:
        - destination:
            type: pod # (1)
          source:
            id: <source_network_mor> # (2)
        - destination:
            type: multus
            name: <network_attachment_definition> # (3)
            namespace: <network_attachment_definition_namespace> # (4)
          source:
            id: <source_network_mor>
      provider:
        source:
          name: <vmware_provider>
          namespace: openshift-mtv
        destination:
          name: <destination_cluster>
          namespace: openshift-mtv
    EOF

    (1) Allowed values are pod and multus.
    (2) Specify the managed object reference of the VMware network.
    (3) Specify the network attachment definition for each additional destination network.
    (4) Specify the namespace of the network attachment definition.
  6. Create a StorageMap CR manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: openshift-mtv
    spec:
      map:
        - destination:
            storageClass: <storage_class>
          source:
            id: <source_datastore_mor> # (1)
      provider:
        source:
          name: <vmware_provider>
          namespace: openshift-mtv
        destination:
          name: <destination_cluster>
          namespace: openshift-mtv
    EOF

    (1) Specify the managed object reference of the VMware data store.
  7. Create a Plan CR manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> # (1)
      namespace: openshift-mtv
    spec:
      provider:
        source:
          name: <vmware_provider>
          namespace: openshift-mtv
        destination:
          name: <destination_cluster>
          namespace: openshift-mtv
      warm: true # (2)
      map:
        network: # (3)
          name: <network_map> # (4)
          namespace: openshift-mtv
        storage:
          name: <storage_map> # (5)
          namespace: openshift-mtv
      targetNamespace: openshift-mtv
      vms: # (6)
        - id: <source_vm_mor> # (7)
        - name: <source_vm>
    EOF

    (1) Specify the name of the Plan CR.
    (2) Specify whether the migration is warm (true) or cold (false). If you specify a warm migration without specifying a value for the cutover parameter in the Migration CR manifest, only the precopy stage will run.
    (3) You can create multiple network mappings for source and destination networks.
    (4) Specify the name of the NetworkMap CR.
    (5) Specify the name of the StorageMap CR.
    (6) You can use either the id or the name parameter to specify the source VMs.
    (7) Specify the managed object reference of the VMware VM.
  8. Optional: To change the time interval between the CBT snapshots for warm migration, patch the vm-import-controller-config config map:

    $ oc patch configmap/vm-import-controller-config \
      -n openshift-cnv \
      -p '{"data": {"warmImport.intervalMinutes": "<interval>"}}'

    Specify the time interval in minutes. The default value is 60.
  9. Create a Migration CR manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration> # (1)
      namespace: openshift-mtv
    spec:
      plan:
        name: <plan> # (2)
        namespace: openshift-mtv
      cutover: <cutover_time> # (3)
    EOF

    (1) Specify the name of the Migration CR.
    (2) Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachineImport CR for each VM that is migrated.
    (3) Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.
  10. View the VirtualMachineImport pods to monitor the progress of the migration:

    $ oc get pods -n openshift-mtv
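
    You can also inspect the Migration CR itself; its status reports the progress of the plan. The fully qualified resource name below is used to avoid ambiguity and is an assumption for illustration.

    $ oc get migrations.forklift.konveyor.io <migration> -n openshift-mtv -o yaml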

3.3.1. Canceling a migration

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration_name> -n openshift-mtv 

Canceling the migration of individual VMs

  1. Add the individual VMs to the Migration CR manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration_name>
      namespace: openshift-mtv
    ...
    spec:
      cancel:
      - id: vm-102 # (1)
      - id: vm-203
      - name: rhel8-vm
    EOF

    (1) The id is the managed object reference of the source VM.
  2. View the VirtualMachineImport pods to monitor the progress of the remaining VMs:

    $ oc get pods -n openshift-mtv

Chapter 4. Upgrading the Migration Toolkit for Virtualization

You can upgrade the MTV Operator by using the OpenShift Container Platform web console to install the new version.

Note

You must upgrade to the next release without skipping a release, for example, from 2.0 to 2.1 or from 2.1 to 2.2.

See Upgrading installed Operators in the OpenShift Container Platform documentation.

Chapter 5. Uninstalling the Migration Toolkit for Virtualization

You can uninstall the Migration Toolkit for Virtualization (MTV) by using the OpenShift Container Platform web console or the command line interface (CLI).

5.1. Uninstalling MTV by using the OpenShift Container Platform web console

You can uninstall Migration Toolkit for Virtualization (MTV) by using the OpenShift Container Platform web console to delete the openshift-mtv project and custom resource definitions (CRDs).

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Click Home → Projects.
  2. Enter openshift-mtv in the Search field to locate the openshift-mtv project.
  3. On the right side of the project, select Delete Project from the Options menu.
  4. In the Delete Project pane, enter the project name and click Delete.
  5. Click AdministrationCustomResourceDefinitions.
  6. Enter forklift in the Search field to locate the CRDs in the forklift.konveyor.io group.
  7. On the right side of each CRD, select Delete CustomResourceDefinition from the Options menu.

5.2. Uninstalling MTV from the command line interface

You can uninstall Migration Toolkit for Virtualization (MTV) from the command line interface (CLI) by deleting the openshift-mtv project and the forklift.konveyor.io custom resource definitions (CRDs).

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Delete the project:

    $ oc delete project openshift-mtv
  2. Delete the CRDs:

    $ oc get crd -o name | grep 'forklift' | xargs oc delete
  3. Delete the OAuthClient:

    $ oc delete oauthclient/forklift-ui
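
    To verify the removal, you can check that no forklift CRDs remain and that the project is gone. The project might be reported as Terminating while deletion completes.

    $ oc get crd -o name | grep 'forklift'
    $ oc get project openshift-mtv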

Chapter 6. Troubleshooting

This section describes ways to troubleshoot migration issues and error messages.

6.1. Error messages

This section describes error messages and how to resolve them.

warm import retry limit reached

The warm import retry limit reached error message is displayed during a warm migration if the virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage. You must delete some of the CBT snapshots from the VM and restart the migration plan.

6.2. Using the must-gather tool

You can collect logs and information about MTV custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

Prerequisites

Collecting logs and CR information

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the oc adm must-gather command:

    $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.0.0

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

6.3. Known issues

This section describes known issues and mitigations.

Network map displays a Destination network not found error

If the network map remains in a NotReady state and the NetworkMap CR manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

Warm migration gets stuck during third precopy

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

You can do one of the following to mitigate this issue:

  • Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.
  • Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    $ oc patch configmap/vm-import-controller-config \
      -n openshift-cnv \
      -p '{"data": {"warmImport.intervalMinutes": "720"}}'

Legal Notice

Copyright © 2021 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.