Chapter 11. Migrating virtual machines from the command line


You can migrate virtual machines to OpenShift Virtualization from the command line.

Important

You must ensure that all prerequisites are met.

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

Table 11.1. Example migration plan roles and their privileges

  • plans.forklift.konveyor.io-v1beta1-view: Can view migration plans but cannot create, delete, or modify them.
  • plans.forklift.konveyor.io-v1beta1-edit: Can create, delete, or modify (all parts of edit permissions) individual migration plans.
  • plans.forklift.konveyor.io-v1beta1-admin: All edit privileges and the ability to delete the entire collection of migration plans.

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).
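As a minimal illustration of this naming scheme, the view role name can be assembled from its three parts in a shell sketch:

```shell
# Compose a pre-defined cluster role name from its parts:
# <resource>.<api_group_with_version>-<action>
resource="plans"
api_group="forklift.konveyor.io-v1beta1"
action="view"
echo "${resource}.${api_group}-${action}"
# prints: plans.forklift.konveyor.io-v1beta1-view
```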

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

  • Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
  • Attach providers created by administrators to storage maps, network maps, and migration plans
  • Not be able to create providers or to change system settings
Table 11.2. Example permissions required for non-administrators to work with migration plan components but not create providers

    Resource              API group              Actions
    plans                 forklift.konveyor.io   get, list, watch, create, update, patch, delete
    migrations            forklift.konveyor.io   get, list, watch, create, update, patch, delete
    hooks                 forklift.konveyor.io   get, list, watch, create, update, patch, delete
    providers             forklift.konveyor.io   get, list, watch
    networkmaps           forklift.konveyor.io   get, list, watch, create, update, patch, delete
    storagemaps           forklift.konveyor.io   get, list, watch, create, update, patch, delete
    forkliftcontrollers   forklift.konveyor.io   get, list, watch
    secrets               "" (core group)        create, patch, delete

Note

To create migration plans, non-administrators must have the create permissions that are part of the edit roles for network maps and storage maps, even when using a template for a network map or a storage map.
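For example, to grant a non-administrator the edit role for migration plans in a single namespace, an administrator can create a RoleBinding. The following sketch uses placeholder user and namespace names:

```yaml
# Hypothetical RoleBinding: grants the pre-defined edit cluster role
# for migration plans to one user, scoped to one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: migration-plan-editor
  namespace: <namespace>
subjects:
  - kind: User
    name: <non_admin_user>
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: plans.forklift.konveyor.io-v1beta1-edit
  apiGroup: rbac.authorization.k8s.io
```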

11.2. Migrating virtual machines

You migrate virtual machines (VMs) using the command-line interface (CLI) by creating MTV custom resources (CRs). The CRs and the migration procedure vary by source provider.

Important

You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.

To migrate to or from an OpenShift cluster that is different from the one the migration plan is defined on, you must have an OpenShift Virtualization service account token with cluster-admin privileges.

You can migrate from a VMware vSphere source provider by using the command-line interface (CLI).

Important

Anti-virus software can cause migrations to fail. It is strongly recommended to remove such software from source VMs before you start a migration.

Important

MTV does not support migrating VMware Non-Volatile Memory Express (NVMe) disks.

Note

To migrate virtual machines (VMs) that have shared disks, see Migrating virtual machines with shared disks.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: # 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: vsphere
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> # 2
      password: <password> # 3
      insecureSkipVerify: <"true"/"false"> # 4
      cacert: | # 5
        <ca_certificate>
      url: <api_end_point> # 6
    EOF

    1. The ownerReferences section is optional.
    2. Specify the vCenter user or the ESX/ESXi user.
    3. Specify the password of the vCenter user or the ESX/ESXi user.
    4. Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the certificate is not required, but the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
    5. When this field is not set and insecureSkipVerify is "false", MTV attempts to use the system CA.
    6. Specify the API endpoint URL of the vCenter or the ESX/ESXi, for example, https://<vCenter_host>/sdk.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: vsphere
      url: <api_end_point> # 1
      settings:
        vddkInitImage: <VDDK_image> # 2
        sdkEndpoint: vcenter # 3
      secret:
        name: <secret> # 4
        namespace: <namespace>
    EOF

    1. Specify the URL of the API endpoint, for example, https://<vCenter_host>/sdk.
    2. Optional, but it is strongly recommended to create a VDDK image to accelerate migrations. Follow the OpenShift documentation to specify the VDDK image you created.
    3. Options: vcenter or esxi.
    4. Specify the name of the provider Secret CR.
  3. Create a Host manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Host
    metadata:
      name: <vmware_host>
      namespace: <namespace>
    spec:
      provider:
        namespace: <namespace>
        name: <source_provider> # 1
      id: <source_host_mor> # 2
      ipAddress: <source_network_ip> # 3
    EOF

    1. Specify the name of the VMware vSphere Provider CR.
    2. Specify the Managed Object Reference (moRef) of the VMware vSphere host. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    3. Specify the IP address of the VMware vSphere migration network.
  4. Create a NetworkMap manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod # 1
          source: # 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> # 3
            namespace: <network_attachment_definition_namespace> # 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF

    1. Allowed values are pod, multus, and ignored. Use ignored to avoid attaching VMs to this network for this migration.
    2. You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (moRef). To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    3. Specify a network attachment definition for each additional OpenShift Virtualization network.
    4. Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  5. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> # 1
          source:
            id: <source_datastore> # 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF

    1. Allowed values are ReadWriteOnce and ReadWriteMany.
    2. Specify the VMware vSphere datastore moRef, for example, f2737930-b567-451a-9ceb-2887f6207009. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
  6. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> # 1
      playbook: | # 2
        LS0tCi0gbm...
    EOF

    1. Optional: Red Hat OpenShift service account. Specify a service account if the hook needs to modify cluster resources.
    2. Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.

    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
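To produce the Base64-encoded value for the playbook field, you can encode a local playbook file with the base64 utility (GNU coreutils). The playbook content and path below are placeholders for illustration:

```shell
# Write a minimal placeholder playbook and Base64-encode it for the
# Hook CR's playbook field; -w0 disables base64 line wrapping so the
# result is a single string.
cat > /tmp/playbook.yml << 'PLAYBOOK'
- hosts: localhost
  tasks:
    - name: Example task
      debug:
        msg: running inside the hook
PLAYBOOK
encoded=$(base64 -w0 /tmp/playbook.yml)
echo "$encoded"
# Decoding the value restores the original playbook.
echo "$encoded" | base64 -d
```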

  7. Edit the network attachment definition (NAD) of the transfer network used for MTV migrations by entering the following command:

    You use this definition to configure an IP address for the interface, either dynamically, by using the Dynamic Host Configuration Protocol (DHCP), or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
  8. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> # 1
      namespace: <namespace>
    spec:
      warm: false # 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: # 3
        network: # 4
          name: <network_map> # 5
          namespace: <namespace>
        storage: # 6
          name: <storage_map> # 7
          namespace: <namespace>
      preserveStaticIPs: true # 8
      networkNameTemplate: <network_interface_template> # 9
      pvcNameTemplate: <pvc_name_template> # 10
      pvcNameTemplateUseGenerateName: true # 11
      targetNamespace: <target_namespace>
      volumeNameTemplate: <volume_name_template> # 12
      vms: # 13
        - id: <source_vm1> # 14
        - name: <source_vm2>
          networkNameTemplate: <network_interface_template_for_this_vm> # 15
          pvcNameTemplate: <pvc_name_template_for_this_vm> # 16
          volumeNameTemplate: <volume_name_template_for_this_vm> # 17
          targetName: <target_name> # 18
          hooks: # 19
            - hook:
                namespace: <namespace>
                name: <hook> # 20
              step: <step> # 21
    EOF

    1. Specify the name of the Plan CR.
    2. Specify whether the migration is warm (true) or cold (false). If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage runs.
    3. Specify only one network map and one storage map per plan.
    4. Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    5. Specify the name of the NetworkMap CR.
    6. Specify a storage mapping even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
    7. Specify the name of the StorageMap CR.
    8. By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP address linked to the interface name in the guest VM lose their IP address. To avoid this, set preserveStaticIPs to true. MTV issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere so that the vNIC properties are reported to MTV.
    9. Optional: Specify a template for the network interface names of the VMs in your plan. The template follows the Go template syntax and has access to the following variables:
    • .NetworkName: If the target network is multus, the name of the Multus network attachment definition; otherwise, empty.
    • .NetworkNamespace: If the target network is multus, the namespace where the Multus network attachment definition is located.
    • .NetworkType: The network type. Options: multus or pod.
    • .NetworkIndex: Sequential index of the network interface (0-based).

      Examples

    • "net-{{.NetworkIndex}}"
    • "{{if eq .NetworkType "pod"}}pod{{else}}multus-{{.NetworkIndex}}{{end}}"

      Names generated from a template cannot exceed 63 characters. This rule applies to network name templates, PVC name templates, VM name templates, and volume name templates.

    10. Optional: Specify a template for the persistent volume claim (PVC) names for the plan. The template follows the Go template syntax and has access to the following variables:
    • .VmName: Name of the VM.
    • .PlanName: Name of the migration plan.
    • .DiskIndex: Initial volume index of the disk.
    • .RootDiskIndex: Index of the root disk.
    • .Shared: true for a shared volume, false for a non-shared volume.

      Examples

    • "{{.VmName}}-disk-{{.DiskIndex}}"
    • "{{if eq .DiskIndex .RootDiskIndex}}root{{else}}data{{end}}-{{.DiskIndex}}"
    • "{{if .Shared}}shared-{{end}}{{.VmName}}-{{.DiskIndex}}"
    11. Optional:
    • When set to true, MTV adds one or more randomly generated alphanumeric characters to the name of each PVC to ensure that all PVCs have unique names.
    • When set to false, if you specify a pvcNameTemplate, MTV does not add such characters to the name of the PVC.

      Warning

      If you set pvcNameTemplateUseGenerateName to false, the generated PVC name might not be unique and might cause conflicts.

    12. Optional: Specify a template for the volume interface names of the VMs in your plan. The template follows the Go template syntax and has access to the following variables:
    • .PVCName: Name of the PVC mounted to the VM using this volume.
    • .VolumeIndex: Sequential index of the volume interface (0-based).

      Examples

    • "disk-{{.VolumeIndex}}"
    • "pvc-{{.PVCName}}"
    13. You can use either the id or the name parameter to specify the source VMs.
    14. Specify the VMware vSphere VM moRef. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    15. Optional: Specify a network interface name template for this VM. Overrides the value set in spec.networkNameTemplate. Variables and examples as in callout 9.
    16. Optional: Specify a PVC name template for this VM. Overrides the value set in spec.pvcNameTemplate. Variables and examples as in callout 10.
    17. Optional: Specify a volume name template for this VM. Overrides the value set in spec.volumeNameTemplate. Variables and examples as in callout 12.
    18. Optional: MTV automatically generates a name for the target VM. You can override this name by entering a new name with this parameter. The name must be unique and a valid Kubernetes subdomain; otherwise, the migration fails.
    19. Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step.
    20. Specify the name of the Hook CR.
    21. Allowed values are PreHook, which runs before the migration plan starts, or PostHook, which runs after the migration is complete.

    Important

    When you migrate a VMware 7 VM to an OpenShift 4.13+ platform that uses CentOS 7.9, the names of the network interfaces change and the static IP configuration for the VM no longer works.

  9. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF

    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
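For example, you can generate a valid cutover value with GNU date; the 30-minute offset is an arbitrary illustration:

```shell
# Produce an ISO 8601 timestamp with an explicit UTC offset,
# 30 minutes from now (GNU date syntax).
cutover=$(date -u -d "+30 minutes" +"%Y-%m-%dT%H:%M:%S+00:00")
echo "$cutover"
```

You can then use this value in the cutover field of the Migration manifest.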

Important

There is a known issue of the forklift-controller consistently failing to reconcile a migration plan and returning an HTTP 500 error. This issue occurs when you specify the user permissions only on the virtual machine (VM).

In MTV, you need to add permissions at the datacenter level, including the storage, networks, switches, and so on, that are used by the VM. You must then propagate the permissions to the child elements.

If you do not want to add permissions at this level, you must manually add the permissions to each required object on the VM host.

11.3.1. Retrieving a VMware vSphere moRef

When you migrate VMs with a VMware vSphere source provider using Migration Toolkit for Virtualization (MTV) from the command line, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.

You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.

Procedure

  1. Retrieve the routes for the project:

    $ oc get route -n openshift-mtv
  2. Retrieve the Inventory service route:

    $ oc get route <inventory_service> -n openshift-mtv
  3. Retrieve the access token:

    $ TOKEN=$(oc whoami -t)
  4. Retrieve the moRef of a VMware vSphere provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere -k
  5. Retrieve the datastores of a VMware vSphere source provider:

    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider_id>/datastores/ -k

    Example output

    [
      {
        "id": "datastore-11",
        "parent": {
          "kind": "Folder",
          "id": "group-s5"
        },
        "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
        "revision": 46,
        "name": "v2v_general_porpuse_ISCSI_DC",
        "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
      },
      {
        "id": "datastore-730",
        "parent": {
          "kind": "Folder",
          "id": "group-s5"
        },
        "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
        "revision": 46,
        "name": "f01-h27-640-SSD_2",
        "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
      },
     ...

In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
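If you save the Inventory service response to a file, you can extract a moRef by datastore name. This sketch assumes jq is installed and uses an abridged copy of the output above:

```shell
# Save an abridged sample of the Inventory response, then extract the
# moRef ("id") of a datastore by its name with jq.
cat > /tmp/datastores.json << 'JSON'
[
  {"id": "datastore-11", "name": "v2v_general_porpuse_ISCSI_DC"},
  {"id": "datastore-730", "name": "f01-h27-640-SSD_2"}
]
JSON
jq -r '.[] | select(.name == "f01-h27-640-SSD_2") | .id' /tmp/datastores.json
# prints: datastore-730
```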

You can migrate VMware virtual machines (VMs) with shared disks by using the Migration Toolkit for Virtualization (MTV). This functionality is available only for cold migrations and is not available for shared boot disks.

Shared disks are disks that are attached to more than one VM and that use the multi-writer option. As a result of these characteristics, shared disks are difficult to migrate.

In certain situations, applications in VMs require shared disks. Databases and clustered file systems are the primary use cases for shared disks.

MTV version 2.7.11 or later includes a parameter named migrateSharedDisks in Plan custom resources (CRs) that instructs MTV to either migrate shared disks or to skip them during migration, as follows:

  • If set to true, MTV migrates the shared disks by using the regular cold migration flow with virt-v2v and labels the shared persistent volume claims (PVCs).
  • If set to false, MTV skips the shared disks and uses the KubeVirt Containerized Data Importer (CDI) for disk transfer.

After the disk transfer, MTV automatically attempts to locate the PVCs of the already migrated shared disks and attach them to the VMs.

By default, migrateSharedDisks is set to true.

To successfully migrate VMs with shared disks, create two Plan CRs as follows:

  • In the first, set migrateSharedDisks to true.

    MTV migrates the following:

    • All shared disks.
    • For each shared disk, one of the VMs attached to it. If possible, choose the VMs so that no shared disk in the plan is attached to more than one of the chosen VMs. See the following figures for further guidance.
    • All unshared disks attached to the VMs you choose for this plan.
  • In the second, set migrateSharedDisks to false.

    MTV migrates the following:

    • All other VMs.
    • The unshared disks of the VMs in the second Plan CR.

When MTV migrates a VM that has a shared disk attached to it, it does not check whether it has already migrated that shared disk. Therefore, it is important to allocate the VMs between the two plans so that each shared disk is migrated once and only once.

To understand how to assign VMs and shared disks to each of the Plan CRs, consider the two figures that follow. In both, migrateSharedDisks is set to true for plan1 and set to false for plan2.

In the first figure, the VMs and shared disks are assigned correctly:

Figure 11.1. Example of correctly assigned VMs and shared disks

Example successful migration

plan1 migrates VMs 2 and 4, shared disks 1, 2, and 3, and the non-shared disks of VMs 2 and 4. VMs 2 and 4 are included in this plan because, between them, they connect to each shared disk exactly once.

plan2 migrates VMs 1 and 3 and their non-shared disks. plan2 does not migrate the shared disks connected to VMs 1 and 3 because migrateSharedDisks is set to false.

MTV migrates each VM and its disks as follows:

  1. From plan1:

    1. VM 2, shared disks 1 and 2, and the non-shared disks attached to VM 2.
    2. VM 4, shared disk 3, and the non-shared disks attached to VM 4.
  2. From plan2:

    1. VM 1 and the non-shared disks attached to it.
    2. VM 3 and the non-shared disks attached to it.

The result is that all the VMs, all the shared disks, and all the non-shared disks are migrated, each only once. MTV is able to reattach all VMs to their disks, including the shared disks.

In the second figure, the VMs and shared disks are not assigned correctly:

Figure 11.2. Example of incorrectly assigned VMs and shared disks

Complex cyclic shared disk dependencies

In this case, MTV migrates each VM and its disks as follows:

  1. From plan1:

    1. VM 2, shared disks 1 and 2, and the non-shared disks attached to VM 2.
    2. VM 3, shared disks 2 and 3, and the non-shared disks attached to VM 3.
  2. From plan2:

    1. VM 1 and the non-shared disks attached to it.
    2. VM 4 and the non-shared disks attached to it.

This migration "succeeds", but it results in a problem: Shared disk 2 is migrated twice by the first Plan CR. You can resolve this problem by using one of the two workarounds that are discussed in the Known issues section, which follows the procedure.

Procedure

  1. In MTV, create a migration plan for the shared disks, the minimum number of VMs connected to them, and the unshared disk of those VMs.
  2. On the VMware cluster, power off all VMs attached to the shared disks.
  3. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.
  4. Select the desired plan.

    The Plan details page opens.

  5. Click the YAML tab of the plan.
  6. Verify that migrateSharedDisks is set to true.

    Example Plan CR with migrateSharedDisks set to true

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: transfer-shared-disks
      namespace: openshift-mtv
    spec:
      map:
        network:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: NetworkMap
          name: vsphere-7gxbs
          namespace: openshift-mtv
          uid: a3c83db3-1cf7-446a-b996-84c618946362
        storage:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: StorageMap
          name: vsphere-mqp7b
          namespace: openshift-mtv
          uid: 20b43d4f-ded4-4798-b836-7c0330d552a0
      migrateSharedDisks: true
      provider:
        destination:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: host
          namespace: openshift-mtv
          uid: abf4509f-1d5f-4ff6-b1f2-18206136922a
        source:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: vsphere
          namespace: openshift-mtv
          uid: be4dc7ab-fedd-460a-acae-a850f6b9543f
      targetNamespace: openshift-mtv
      vms:
        - id: vm-69
          name: vm-1-with-shared-disks

  7. Start the migration of the first plan and wait for it to finish.
  8. Create a second Plan CR to migrate all the other VMs and their unshared disks to the same target namespace as the first.
  9. In the Plans for virtualization page of the Red Hat OpenShift web console, select the new plan.

    The Plan details page opens.

  10. Click the YAML tab of the plan.
  11. Set migrateSharedDisks to false.

    Example Plan CR with migrateSharedDisks set to false

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: skip-shared-disks
      namespace: openshift-mtv
    spec:
      map:
        network:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: NetworkMap
          name: vsphere-7gxbs
          namespace: openshift-mtv
          uid: a3c83db3-1cf7-446a-b996-84c618946362
        storage:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: StorageMap
          name: vsphere-mqp7b
          namespace: openshift-mtv
          uid: 20b43d4f-ded4-4798-b836-7c0330d552a0
      migrateSharedDisks: false
      provider:
        destination:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: host
          namespace: openshift-mtv
          uid: abf4509f-1d5f-4ff6-b1f2-18206136922a
        source:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: vsphere
          namespace: openshift-mtv
          uid: be4dc7ab-fedd-460a-acae-a850f6b9543f
      targetNamespace: openshift-mtv
      vms:
        - id: vm-71
          name: vm-2-with-shared-disks

  12. Start the migration of the second plan and wait for it to finish.
  13. Verify that all shared disks are attached to the same VMs as they were before migration and that none are duplicated. In case of problems, see the discussion of known issues that follows.

11.3.2.1. Known issues

11.3.2.1.1. Cyclic shared disk dependencies

Problem: VMs with cyclic shared disk dependencies cannot be migrated successfully.

Explanation: When migrateSharedDisks is set to true, MTV migrates each VM in the plan, one by one, and any shared disks attached to it, without determining if a shared disk was already migrated.

In the case of 2 VMs sharing one disk, there is no problem. MTV transfers the shared disk and attaches the 2 VMs to the shared disk after the migration.

However, if there is a cyclic dependency of shared disks between 3 or more VMs, MTV either duplicates or omits one of the shared disks. The figure that follows illustrates the simplest version of this problem.

Figure 11.3. Simple example of cyclic shared disks

Simple cyclic shared disk dependencies

In this case, the VMs and shared disks cannot be migrated in the same Plan CR. Although this problem could be solved using migrateSharedDisks and 2 Plan CRs, it illustrates the basic issue that must be avoided in migrating VMs with shared disks.

11.3.2.1.2. Workarounds

As discussed previously, it is important to create two Plan CRs in which each shared disk is migrated exactly once. However, if your migration does result in a shared disk being either duplicated or not transferred, you can use one of the following workarounds:

  • Duplicate one of the shared disks
  • "Remove" one of the shared disks
11.3.2.1.2.1. Duplicate a shared disk

In the figure that follows, VMs 2 and 3 are migrated with the shared disks in the first plan, and VM 1 is migrated in the second plan. Doing this breaks the cyclic dependencies, but this workaround has a drawback: It results in shared disk 3 being duplicated. The solution is to remove the duplicated PV and migrate VM 1 again.

Figure 11.4. Duplicated shared disk

Duplicate a shared disk

Advantage:

The source VMs are not affected.

Disadvantage:

One shared disk is transferred twice, so you need to manually delete the duplicate disk and reconnect VM 3 to shared disk 3 in Red Hat OpenShift after the migration.

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration> -n <namespace> # 1

    1. Specify the name of the Migration CR.

Canceling the migration of specific VMs

  1. Add the specific VMs to the spec.cancel block of the Migration manifest:

    Example YAML for canceling the migrations of two VMs

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: <namespace>
    ...
    spec:
      cancel:
      - id: vm-102 # 1
      - id: vm-203
        name: rhel8-vm
    EOF

    1. You can specify a VM by using the id key or the name key. The value of the id key is the managed object reference (moRef) for a VMware VM or the VM UUID for a RHV VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

    $ oc get migration/<migration> -n <namespace> -o yaml

You can migrate from a Red Hat Virtualization (RHV) source provider by using the command-line interface (CLI).

Prerequisites

If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.

Note
  • Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
  • LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs in the target environment at the same time, because simultaneous use might lead to data corruption.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: ovirt
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify the RHV Manager user.
    3
    Specify the user password.
    4
    Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the certificate is not required, but the migration is insecure: the transferred data is sent over an unverified connection and potentially sensitive data could be exposed.
    5
    Enter the Manager CA certificate, unless it was replaced by a third-party certificate, in which case, enter the Manager Apache CA certificate. You can retrieve the Manager CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
    6
    Specify the API endpoint URL, for example, https://<engine_host>/ovirt-engine/api.
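Before pasting a certificate into the cacert field, it is worth confirming the PEM file is actually a parseable certificate. A sketch using openssl; the self-signed certificate generated here is only a stand-in for the Manager CA certificate you retrieved:

```shell
# Generate a throwaway certificate as a stand-in (assumption: openssl is available)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/key.pem -out /tmp/ca.pem \
  -days 1 -nodes -subj '/CN=rhv.example.com' 2>/dev/null
# Sanity-check the PEM before pasting it into the Secret's cacert field
openssl x509 -in /tmp/ca.pem -noout -subject
```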
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: ovirt
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the API endpoint, for example, https://<engine_host>/ovirt-engine/api.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    You can use either the id or the name parameter to specify the source network. For id, specify the RHV network Universal Unique ID (UUID).
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_storage_domain> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the RHV storage domain UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
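A malformed UUID in the map is an easy mistake to make when copying values around. A quick shell check of the format, using the example UUID from the callout above:

```shell
# Sketch: verify that a value looks like a storage domain UUID before using it
# as the source id in the StorageMap (example UUID from the text above)
UUID=f2737930-b567-451a-9ceb-2887f6207009
echo "$UUID" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' && echo valid
```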
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    Optional: Red Hat OpenShift service account. A service account is required only if the hook modifies cluster resources.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
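The playbook value must be Base64-encoded before it goes into the Hook CR. A sketch of preparing it; the playbook content is a placeholder:

```shell
# Write a placeholder playbook (assumption: yours does something useful)
cat > /tmp/playbook.yml <<'EOF'
- hosts: localhost
  tasks:
  - debug:
      msg: pre-migration hook ran
EOF
# Encode it for the Hook CR's spec.playbook field (GNU coreutils base64)
base64 -w0 /tmp/playbook.yml
```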

  6. Enter the following command to edit the network attachment definition (NAD) of the transfer network used for MTV migrations.

    You use this definition to configure an IP address for the interface, either dynamically from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
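For the static case, the IP address comes from the NAD's IPAM configuration. The following is an illustrative fragment only: the bridge name, address range, gateway, and the use of the whereabouts IPAM plugin are all assumptions for your environment, not values the procedure requires:

```yaml
# Illustrative transfer-network NAD with a static address via the whereabouts
# IPAM plugin, plus the MTV route annotation (all values are examples)
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: migration-network
  namespace: openshift-mtv
  annotations:
    forklift.konveyor.io/route: 10.0.0.10
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br-transfer",
      "ipam": {
        "type": "whereabouts",
        "range": "10.0.0.0/24",
        "gateway": "10.0.0.1"
      }
    }
```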
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      preserveClusterCpuModel: true 2
      warm: false 3
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 4
        network: 5
          name: <network_map> 6
          namespace: <namespace>
        storage: 7
          name: <storage_map> 8
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 9
        - id: <source_vm1> 10
        - name: <source_vm2>
          hooks: 11
            - hook:
                namespace: <namespace>
                name: <hook> 12
              step: <step> 13
    EOF
    1
    Specify the name of the Plan CR.
    2
    See note below.
    3
    Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.
    4
    Specify only one network map and one storage map per plan.
    5
    Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    6
    Specify the name of the NetworkMap CR.
    7
    Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    8
    Specify the name of the StorageMap CR.
    9
    You can use either the id or the name parameter to specify the source VMs.
    10
    Specify the RHV VM UUID.
    11
    Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step.
    12
    Specify the name of the Hook CR.
    13
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
    Note
    • If the migrated machine is set with a custom CPU model, it will be set with that CPU model in the destination cluster, regardless of the setting of preserveClusterCpuModel.
    • If the migrated machine is not set with a custom CPU model:

      • If preserveClusterCpuModel is set to true, MTV checks the CPU model of the VM when it runs in RHV, based on the cluster's configuration, and then sets the migrated VM with that CPU model.
      • If preserveClusterCpuModel is set to false, MTV does not set a CPU type and the VM is set with the default CPU model of the destination cluster.
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
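A cutover timestamp in this format can be generated rather than typed by hand. A sketch using GNU date; the two-hour lead time is arbitrary:

```shell
# Generate a cutover time two hours from now in ISO 8601 format with a UTC
# offset, as the Migration CR expects (assumption: GNU date)
date -u -d '+2 hours' '+%Y-%m-%dT%H:%M:%S.000+00:00'
```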

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration> -n <namespace> 
    1
    1
    Specify the name of the Migration CR.

Canceling the migration of specific VMs

  1. Add the specific VMs to the spec.cancel block of the Migration manifest:

    Example YAML for canceling the migrations of two VMs

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: <namespace>
    ...
    spec:
      cancel:
      - id: vm-102 1
      - id: vm-203
        name: rhel8-vm
    EOF

    1
    You can specify a VM by using the id key or the name key.

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

    $ oc get migration/<migration> -n <namespace> -o yaml

11.5. Migrating from an OpenStack source provider

You can migrate from an OpenStack source provider by using the command-line interface (CLI).

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: openstack
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      domainName: <domain_name>
      projectName: <project_name>
      regionName: <region_name>
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify the OpenStack user.
    3
    Specify the OpenStack user's password.
    4
    Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the certificate is not required, but the migration is insecure: the transferred data is sent over an unverified connection and potentially sensitive data could be exposed.
    5
    When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
    6
    Specify the API endpoint URL, for example, https://<identity_service>/v3.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: openstack
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the API endpoint.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    You can use either the id or the name parameter to specify the source network. For id, specify the OpenStack network UUID.
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_volume_type> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    Optional: Red Hat OpenShift service account. A service account is required only if the hook modifies cluster resources.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  6. Enter the following command to edit the network attachment definition (NAD) of the transfer network used for MTV migrations.

    You use this definition to configure an IP address for the interface, either dynamically from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 2
        network: 3
          name: <network_map> 4
          namespace: <namespace>
        storage: 5
          name: <storage_map> 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 7
        - id: <source_vm1> 8
        - name: <source_vm2>
          hooks: 9
            - hook:
                namespace: <namespace>
                name: <hook> 10
              step: <step> 11
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify only one network map and one storage map per plan.
    3
    Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify a storage mapping, even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    6
    Specify the name of the StorageMap CR.
    7
    You can use either the id or the name parameter to specify the source VMs.
    8
    Specify the OpenStack VM UUID.
    9
    Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step.
    10
    Specify the name of the Hook CR.
    11
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration> -n <namespace> 
    1
    1
    Specify the name of the Migration CR.

Canceling the migration of specific VMs

  1. Add the specific VMs to the spec.cancel block of the Migration manifest:

    Example YAML for canceling the migrations of two VMs

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: <namespace>
    ...
    spec:
      cancel:
      - id: vm-102 1
      - id: vm-203
        name: rhel8-vm
    EOF

    1
    You can specify a VM by using the id key or the name key.

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

    $ oc get migration/<migration> -n <namespace> -o yaml

You can migrate from Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the command-line interface (CLI).

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: ova
        createdForResourceType: providers
    type: Opaque
    stringData:
      url: <nfs_server:/nfs_path> 2
    EOF
    1
    The ownerReferences section is optional.
    2
    Where <nfs_server> is the IP address or hostname of the server where the share was created, and <nfs_path> is the path on the server where the OVA files are stored.
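The URL is the server and path joined by a colon, as in standard NFS export notation. A sketch of splitting such a value back into its parts, with example values:

```shell
# Sketch: split an OVA provider URL of the form <nfs_server>:/<nfs_path>
# (example values; substitute your own server and export path)
URL="10.2.0.4:/ova/exports"
NFS_SERVER=${URL%%:*}
NFS_PATH=${URL#*:}
echo "$NFS_SERVER $NFS_PATH"
```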
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: ova
      url: <nfs_server:/nfs_path> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Where <nfs_server> is the IP address or hostname of the server where the share was created, and <nfs_path> is the path on the server where the OVA files are stored.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source:
            id: <source_network_id> 2
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    Specify the OVA network Universal Unique ID (UUID).
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            name: Dummy storage for source provider <provider_name> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing <provider_name> with the actual name of the provider.
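Because the phrase must match exactly, it is safest to build it rather than retype it. A sketch; the provider name is an example:

```shell
# Build the exact source storage name that an OVA StorageMap requires
# (the provider name below is an example; use your provider's actual name)
PROVIDER_NAME=ova-provider
echo "Dummy storage for source provider ${PROVIDER_NAME}"
```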
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    Optional: Red Hat OpenShift service account. A service account is required only if the hook modifies cluster resources.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  6. Enter the following command to edit the network attachment definition (NAD) of the transfer network used for MTV migrations.

    You use this definition to configure an IP address for the interface, either dynamically from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 2
        network: 3
          name: <network_map> 4
          namespace: <namespace>
        storage: 5
          name: <storage_map> 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 7
        - id: <source_vm1> 8
        - name: <source_vm2>
          hooks: 9
            - hook:
                namespace: <namespace>
                name: <hook> 10
              step: <step> 11
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify only one network map and one storage map per plan.
    3
    Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    6
    Specify the name of the StorageMap CR.
    7
    You can use either the id or the name parameter to specify the source VMs.
    8
    Specify the OVA VM UUID.
    9
    Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    10
    Specify the name of the Hook CR.
    11
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration> -n <namespace> 
    1
    1
    Specify the name of the Migration CR.

Canceling the migration of specific VMs

  1. Add the specific VMs to the spec.cancel block of the Migration manifest:

    Example YAML for canceling the migrations of two VMs

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: <namespace>
    ...
    spec:
      cancel:
      - id: vm-102 1
      - id: vm-203
        name: rhel8-vm
    EOF

    1
    You can specify a VM by using the id key or the name key.

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

    $ oc get migration/<migration> -n <namespace> -o yaml

You can use a Red Hat OpenShift Virtualization provider as either a source provider or as a destination provider. You can migrate from an OpenShift Virtualization source provider by using the command-line interface (CLI).

Note

The Red Hat OpenShift cluster version of the source provider must be 4.16 or later.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: openshift
        createdForResourceType: providers
    type: Opaque
    stringData:
      token: <token> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify a token for a service account with cluster-admin privileges. If both token and url are left blank, the local OpenShift cluster is used.
    3
    Specify the user password.
    4
    Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the certificate is not required, but the migration is insecure: the transferred data is sent over an unverified connection and potentially sensitive data could be exposed.
    5
    When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
    6
    Specify the URL of the endpoint of the API server.
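Service-account tokens are JWTs, so their expiry can be checked before pasting one into the Secret by decoding the payload segment. A sketch with a fabricated payload (a real token has three dot-separated segments; decode the middle one):

```shell
# Decode a JWT payload segment to inspect its claims, such as "exp".
# The payload below is fabricated for illustration, not a real token.
TOKEN_PAYLOAD='eyJleHAiOjE5MDAwMDAwMDB9'
echo "$TOKEN_PAYLOAD" | base64 -d
```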
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: openshift
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the endpoint of the API server.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source:
            name: <network_name>
            type: pod
        - destination:
            name: <network_attachment_definition> 2
            namespace: <network_attachment_definition_namespace> 3
            type: multus
          source:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    Specify a network attachment definition for each additional OpenShift Virtualization network. Specify the namespace either by using the namespace property or with a name built as follows: <network_namespace>/<network_name>.
    3
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
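As the callouts above note, the NAD can be referenced either with separate name and namespace fields or with a single name of the form <network_namespace>/<network_name>. A sketch of composing the combined form, with example values:

```shell
# Compose the combined "<network_namespace>/<network_name>" reference
# (example namespace and NAD name)
NAMESPACE=demo-ns
NAD=vlan10
echo "${NAMESPACE}/${NAD}"
```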
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> # 1
          source:
            name: <storage_class>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF

    1  Allowed values are ReadWriteOnce and ReadWriteMany.
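
    For example, a filled-in map entry might look like the following sketch. The class names nfs and ocs-storagecluster-ceph-rbd are hypothetical examples, not defaults:

    ```yaml
    # Hypothetical storage mapping: source class "nfs" to destination class
    # "ocs-storagecluster-ceph-rbd" with shared (RWX) access.
    map:
      - destination:
          storageClass: ocs-storagecluster-ceph-rbd
          accessMode: ReadWriteMany
        source:
          name: nfs
    ```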
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> # 1
      playbook: | # 2
        LS0tCi0gbm...
    EOF

    1  Optional: Red Hat OpenShift service account. Specify the serviceAccount parameter if the hook modifies cluster resources.
    2  Base64-encoded Ansible Playbook. If you specify a playbook, the image must include ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
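
    The playbook field expects base64-encoded content. The following is a minimal sketch of producing that value, assuming GNU coreutils (base64 -w0 disables line wrapping; on macOS, omit -w0) and a hypothetical local file named playbook.yml:

    ```shell
    # Write a trivial example playbook (hypothetical content).
    cat > playbook.yml << 'EOF'
    - hosts: localhost
      tasks:
        - debug:
            msg: pre-migration hook
    EOF

    # Emit the single-line base64 string to paste into the Hook CR's playbook field.
    base64 -w0 playbook.yml
    ```

    Decoding the emitted string with base64 -d returns the original playbook, which is a quick way to verify the value before pasting it into the manifest.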

  6. Edit the network attachment definition (NAD) of the transfer network used for MTV migrations by entering the following command.

    You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinition <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> # 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: # 2
        network: # 3
          name: <network_map> # 4
          namespace: <namespace>
        storage: # 5
          name: <storage_map> # 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms:
        - name: <source_vm>
          namespace: <namespace>
          hooks: # 7
            - hook:
                namespace: <namespace>
                name: <hook> # 8
              step: <step> # 9
    EOF

    1  Specify the name of the Plan CR.
    2  Specify only one network map and one storage map per plan.
    3  Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4  Specify the name of the NetworkMap CR.
    5  Specify a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
    6  Specify the name of the StorageMap CR.
    7  Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step.
    8  Specify the name of the Hook CR.
    9  Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
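
    For example, a vms entry for a VM that runs one hook before and one after the migration could look like the following sketch. Both hook names are hypothetical:

    ```yaml
    # Hypothetical vms entry: at most two hooks per VM, one per migration step.
    vms:
      - name: <source_vm>
        namespace: <namespace>
        hooks:
          - hook:
              name: pre-migration-hook
              namespace: <namespace>
            step: PreHook
          - hook:
              name: post-migration-hook
              namespace: <namespace>
            step: PostHook
    ```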
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
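
    One way to produce a compliant cutover value is with the date command. This is a sketch that assumes GNU date on a Linux host (on macOS, use date -u -v+30M instead of the -d option):

    ```shell
    # Generate an ISO 8601 cutover timestamp with an explicit UTC offset,
    # set 30 minutes in the future.
    cutover=$(date -u -d '+30 minutes' '+%Y-%m-%dT%H:%M:%S+00:00')
    echo "$cutover"
    ```

    The emitted value, for example 2024-04-04T01:23:45+00:00, can be pasted directly into the cutover field of the Migration manifest.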

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration> -n <namespace> # 1

    1  Specify the name of the Migration CR.

Canceling the migration of specific VMs

  1. Add the specific VMs to the spec.cancel block of the Migration manifest:

    Example YAML for canceling the migrations of two VMs

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: <namespace>
    ...
    spec:
      cancel:
      - id: vm-102 # 1
      - id: vm-203
        name: rhel8-vm
    EOF

    1  You can specify a VM by using the id key or the name key.

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

    $ oc get migration/<migration> -n <namespace> -o yaml
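
    To check the remaining VMs without scanning the whole document, you can filter the saved output. The following is a rough sketch that assumes the status block reports per-VM phase fields such as Completed; the fixture file stands in for real cluster output, and a tool such as yq or jq would be more robust than grep:

    ```shell
    # migration.yaml stands in for the output of:
    #   oc get migration/<migration> -n <namespace> -o yaml
    # The status content below is a hypothetical, simplified fixture.
    cat > migration.yaml << 'EOF'
    status:
      vms:
        - name: rhel8-vm
          phase: Completed
        - name: win2k19-vm
          phase: CopyingDisks
    EOF

    # Count VMs reported as Completed.
    grep -c 'phase: Completed' migration.yaml
    ```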