Chapter 5. Migrating virtual machines from the command line

You can migrate virtual machines to OpenShift Virtualization from the command line.

Important

You must ensure that all prerequisites are met.

5.1. Permissions needed by non-administrators to work with migration plan components

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

Table 5.1. Example migration plan roles and their privileges

Role                                        Description
plans.forklift.konveyor.io-v1beta1-view     Can view migration plans but cannot create, delete, or modify them
plans.forklift.konveyor.io-v1beta1-edit     Can create, delete, or modify individual migration plans
plans.forklift.konveyor.io-v1beta1-admin    All edit privileges, plus the ability to delete the entire collection of migration plans

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).
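For example, a RoleBinding that grants one of these cluster roles within a single namespace might look like the following sketch; the user alice, the binding name, and the namespace mtv-plans are placeholders:

```shell
# Hypothetical example: give user "alice" view-only access to migration plans
# in the namespace "mtv-plans". Review the manifest, then apply it with
# "oc apply -f plan-view-binding.yaml".
cat << 'EOF' > plan-view-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-plan-view
  namespace: mtv-plans
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: plans.forklift.konveyor.io-v1beta1-view
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: alice
EOF
```

Because the ClusterRole is referenced from a RoleBinding rather than a ClusterRoleBinding, the granted permissions apply only within the mtv-plans namespace.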

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

  • Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
  • Attach providers created by administrators to storage maps, network maps, and migration plans
  • Be unable to create providers or change system settings
Table 5.2. Example permissions required for non-administrators to work with migration plan components but not create providers

Actions                                            API group               Resource
get, list, watch, create, update, patch, delete    forklift.konveyor.io    plans
get, list, watch, create, update, patch, delete    forklift.konveyor.io    migrations
get, list, watch, create, update, patch, delete    forklift.konveyor.io    hooks
get, list, watch                                   forklift.konveyor.io    providers
get, list, watch, create, update, patch, delete    forklift.konveyor.io    networkmaps
get, list, watch, create, update, patch, delete    forklift.konveyor.io    storagemaps
get, list, watch                                   forklift.konveyor.io    forkliftcontrollers
create, patch, delete                              "" (core API group)     secrets

Note

To create migration plans, non-administrators need the create permissions that are part of the edit roles for network maps and storage maps, even when using a template for a network map or a storage map.
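The permission set in Table 5.2 can be expressed as a namespaced Role. The following is a sketch with placeholder names (migration-user, mtv-namespace); bind it to a user with a RoleBinding, then apply both manifests:

```shell
# Sketch of a Role implementing the Table 5.2 permissions; the name and
# namespace are placeholders. Review the file, then apply it with
# "oc apply -f migration-user-role.yaml".
cat << 'EOF' > migration-user-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: migration-user
  namespace: mtv-namespace
rules:
  # Full edit access to plans, migrations, hooks, and the two map types
  - apiGroups: ["forklift.konveyor.io"]
    resources: ["plans", "migrations", "hooks", "networkmaps", "storagemaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Read-only access to providers and controller settings
  - apiGroups: ["forklift.konveyor.io"]
    resources: ["providers", "forkliftcontrollers"]
    verbs: ["get", "list", "watch"]
  # Secrets live in the core API group (the empty string)
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "patch", "delete"]
EOF
```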

5.2. Migrating virtual machines

You migrate virtual machines (VMs) from the command line (CLI) by creating MTV custom resources (CRs). The CRs and the migration procedure vary by source provider.

Important

You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.

To migrate to or from an OpenShift cluster other than the one on which the migration plan is defined, you must have an OpenShift Virtualization service account token with cluster-admin privileges.

5.2.1. Migrating from a VMware vSphere source provider

You can migrate from a VMware vSphere source provider using the CLI.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: vsphere
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify the vCenter user or the ESX/ESXi user.
    3
    Specify the password of the vCenter user or the ESX/ESXi user.
    4
    Specify "true" to skip certificate verification or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the migration proceeds over an insecure connection and the CA certificate is not required; however, because the transferred data is sent over an insecure connection, potentially sensitive data could be exposed.
    5
    When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
    6
    Specify the API endpoint URL of the vCenter or the ESX/ESXi host, for example, https://<vCenter_host>/sdk.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: vsphere
      url: <api_end_point> 1
      settings:
        vddkInitImage: <VDDK_image> 2
        sdkEndpoint: vcenter 3
      secret:
        name: <secret> 4
        namespace: <namespace>
    EOF
    1
    Specify the URL of the API endpoint, for example, https://<vCenter_host>/sdk.
    2
    Optional, but creating a VDDK image to accelerate migrations is strongly recommended. Follow the OpenShift documentation to specify the VDDK image you created.
    3
    Allowed values are vcenter and esxi.
    4
    Specify the name of the provider Secret CR.
  3. Create a Host manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Host
    metadata:
      name: <vmware_host>
      namespace: <namespace>
    spec:
      provider:
        namespace: <namespace>
        name: <source_provider> 1
      id: <source_host_mor> 2
      ipAddress: <source_network_ip> 3
    EOF
    1
    Specify the name of the VMware vSphere Provider CR.
    2
    Specify the Managed Object Reference (MoRef) of the VMware vSphere host.
    3
    Specify the IP address of the VMware vSphere migration network.
  4. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (MoRef).
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  5. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_datastore> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the VMware vSphere data storage MoRef. For example, f2737930-b567-451a-9ceb-2887f6207009.
  6. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/konveyor/hook-runner
      playbook: |
        LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
        YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
        IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
        cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
        bG9hZAoK
    EOF

    where:

    playbook refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      warm: false 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 3
        network: 4
          name: <network_map> 5
          namespace: <namespace>
        storage: 6
          name: <storage_map> 7
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 8
        - id: <source_vm> 9
        - name: <source_vm>
          hooks: 10
            - hook:
                namespace: <namespace>
                name: <hook> 11
              step: <step> 12
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify whether the migration is warm (true) or cold (false). If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage runs.
    3
    Specify only one network map and one storage map per plan.
    4
    Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    5
    Specify the name of the NetworkMap CR.
    6
    Specify a storage mapping even if the VMs to be migrated have no disk images assigned. The mapping can be empty in this case.
    7
    Specify the name of the StorageMap CR.
    8
    You can use either the id or the name parameter to specify the source VMs.
    9
    Specify the VMware vSphere VM MoRef.
    10
    Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    11
    Specify the name of the Hook CR.
    12
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with a UTC offset, for example, 2024-04-04T01:23:45.678+09:00.
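The Base64-encoded playbook in the Hook manifest above is easier to review after decoding it locally. A small sketch; the output file name is arbitrary, and you can encode a playbook of your own with the reverse operation, base64 -w0 playbook.yml:

```shell
# Decode the Base64 playbook from the Hook manifest to review it before applying.
base64 -d << 'EOF' > hook-playbook.yml
LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
bG9hZAoK
EOF
# Print the decoded Ansible playbook, which loads the plan and workload data.
cat hook-playbook.yml
```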

5.2.2. Migrating from a Red Hat Virtualization source provider

You can migrate from a Red Hat Virtualization (RHV) source provider using the CLI.

Prerequisites

If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.

Note
  • Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
  • LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that they are not being used by VMs in the target environment at the same time; simultaneous use might lead to data corruption.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: ovirt
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify the RHV Manager user.
    3
    Specify the user password.
    4
    Specify "true" to skip certificate verification or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the migration proceeds over an insecure connection and the CA certificate is not required; however, because the transferred data is sent over an insecure connection, potentially sensitive data could be exposed.
    5
    Enter the Manager CA certificate, unless it was replaced by a third-party certificate, in which case, enter the Manager Apache CA certificate. You can retrieve the Manager CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
    6
    Specify the API endpoint URL, for example, https://<engine_host>/ovirt-engine/api.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: ovirt
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the API endpoint, for example, https://<engine_host>/ovirt-engine/api.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    You can use either the id or the name parameter to specify the source network. For id, specify the RHV network Universal Unique ID (UUID).
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_storage_domain> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the RHV storage domain UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/konveyor/hook-runner
      playbook: |
        LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
        YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
        IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
        cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
        bG9hZAoK
    EOF

    where:

    playbook refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  6. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      preserveClusterCpuModel: true 2
      warm: false 3
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 4
        network: 5
          name: <network_map> 6
          namespace: <namespace>
        storage: 7
          name: <storage_map> 8
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 9
        - id: <source_vm> 10
        - name: <source_vm>
          hooks: 11
            - hook:
                namespace: <namespace>
                name: <hook> 12
              step: <step> 13
    EOF
    1
    Specify the name of the Plan CR.
    2
    See note below.
    3
    Specify whether the migration is warm (true) or cold (false). If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage runs.
    4
    Specify only one network map and one storage map per plan.
    5
    Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    6
    Specify the name of the NetworkMap CR.
    7
    Specify a storage mapping even if the VMs to be migrated have no disk images assigned. The mapping can be empty in this case.
    8
    Specify the name of the StorageMap CR.
    9
    You can use either the id or the name parameter to specify the source VMs.
    10
    Specify the RHV VM UUID.
    11
    Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    12
    Specify the name of the Hook CR.
    13
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
    Note
    • If the migrated machine is set with a custom CPU model, it will be set with that CPU model in the destination cluster, regardless of the setting of preserveClusterCpuModel.
    • If the migrated machine is not set with a custom CPU model:

      • If preserveClusterCpuModel is set to `true`, MTV checks the CPU model of the VM when it runs in RHV, based on the cluster's configuration, and then sets the migrated VM with that CPU model.
      • If preserveClusterCpuModel is set to `false`, MTV does not set a CPU type, and the VM is set with the default CPU model of the destination cluster.
  7. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with a UTC offset, for example, 2024-04-04T01:23:45.678+09:00.
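For a warm migration, you can compute the cutover timestamp instead of writing it by hand. The following is a sketch assuming GNU date; the CR names and namespace are placeholders:

```shell
# Generate a Migration manifest whose cutover is one hour from now, in
# ISO 8601 format with a UTC offset. Review the file, then apply it with
# "oc apply -f migration-with-cutover.yaml".
CUTOVER=$(date -u -d '+1 hour' +%Y-%m-%dT%H:%M:%S+00:00)
cat << EOF > migration-with-cutover.yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: warm-migration
  namespace: mtv-namespace
spec:
  plan:
    name: warm-plan
    namespace: mtv-namespace
  cutover: $CUTOVER
EOF
```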

5.2.3. Migrating from an OpenStack source provider

You can migrate from an OpenStack source provider using the CLI.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: openstack
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      domainName: <domain_name>
      projectName: <project_name>
      regionName: <region_name>
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify the OpenStack user.
    3
    Specify the OpenStack user password.
    4
    Specify "true" to skip certificate verification or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the migration proceeds over an insecure connection and the CA certificate is not required; however, because the transferred data is sent over an insecure connection, potentially sensitive data could be exposed.
    5
    When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
    6
    Specify the API endpoint URL, for example, https://<identity_service>/v3.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: openstack
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the API endpoint.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    You can use either the id or the name parameter to specify the source network. For id, specify the OpenStack network UUID.
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_volume_type> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/konveyor/hook-runner
      playbook: |
        LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
        YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
        IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
        cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
        bG9hZAoK
    EOF

    where:

    playbook refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  6. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 2
        network: 3
          name: <network_map> 4
          namespace: <namespace>
        storage: 5
          name: <storage_map> 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 7
        - id: <source_vm> 8
        - name: <source_vm>
          hooks: 9
            - hook:
                namespace: <namespace>
                name: <hook> 10
              step: <step> 11
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify only one network map and one storage map per plan.
    3
    Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify a storage mapping even if the VMs to be migrated have no disk images assigned. The mapping can be empty in this case.
    6
    Specify the name of the StorageMap CR.
    7
    You can use either the id or the name parameter to specify the source VMs.
    8
    Specify the OpenStack VM UUID.
    9
    Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    10
    Specify the name of the Hook CR.
    11
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  7. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with a UTC offset, for example, 2024-04-04T01:23:45.678+09:00.
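One easy mistake in the Secret manifest above is leaving insecureSkipVerify unquoted: stringData values must be strings, and a bare true or false is parsed by YAML as a boolean, which can cause the manifest to be rejected. A sketch with placeholder credentials that keeps the value quoted:

```shell
# Placeholder credentials throughout; insecureSkipVerify must be the quoted
# string "false" (or "true"), not a bare YAML boolean. Review the file,
# then apply it with "oc apply -f openstack-secret.yaml".
cat << 'EOF' > openstack-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: openstack-credentials
  namespace: mtv-namespace
  labels:
    createdForProviderType: openstack
    createdForResourceType: providers
type: Opaque
stringData:
  user: admin
  password: example-password
  insecureSkipVerify: "false"
  domainName: Default
  projectName: demo
  regionName: RegionOne
  url: https://identity.example.com:5000/v3
EOF
grep -q 'insecureSkipVerify: "false"' openstack-secret.yaml && echo "quoted correctly"
```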

5.2.4. Migrating from an Open Virtual Appliance (OVA) source provider

You can use the CLI to migrate from Open Virtual Appliance (OVA) files that were created by VMware vSphere.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: ova
        createdForResourceType: providers
    type: Opaque
    stringData:
      url: <nfs_server:/nfs_path> 2
    EOF
    1
    The ownerReferences section is optional.
    2
    where: nfs_server is the IP address or hostname of the server where the share was created, and nfs_path is the path on the server where the OVA files are stored.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: ova
      url:  <nfs_server:/nfs_path> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    where: nfs_server is the IP address or hostname of the server where the share was created, and nfs_path is the path on the server where the OVA files are stored.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source:
            id: <source_network_id> 2
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    Specify the OVA network Universal Unique ID (UUID).
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            name: Dummy storage for source provider <provider_name> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing <provider_name> with the actual name of the provider.
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/konveyor/hook-runner
      playbook: |
        LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
        YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
        IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
        cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
        bG9hZAoK
    EOF

    where:

    playbook refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
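    The playbook value is nothing more than a Base64 encoding of a plain Ansible playbook. As a sketch, assuming your playbook is saved as my-playbook.yml (a placeholder name), you can produce the encoded string with the base64 utility:

    ```shell
    # Write a minimal playbook to encode; my-playbook.yml is a placeholder name.
    cat > my-playbook.yml << 'PLAYBOOK'
    ---
    - name: Main
      hosts: localhost
      tasks:
      - name: Load Plan
        include_vars:
          file: "/tmp/hook/plan.yml"
          name: plan
    PLAYBOOK

    # Encode the playbook for the Hook CR; -w0 disables line wrapping (GNU base64).
    encoded=$(base64 -w0 my-playbook.yml)

    # Decoding the string reproduces the original playbook.
    echo "$encoded" | base64 -d
    ```

    Paste the encoded string into the playbook field of the Hook manifest.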

  6. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 2
        network: 3
          name: <network_map> 4
          namespace: <namespace>
        storage: 5
          name: <storage_map> 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 7
        - id: <source_vm> 8
        - name: <source_vm>
          hooks: 9
            - hook:
                namespace: <namespace>
                name: <hook> 10
              step: <step> 11
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify only one network map and one storage map per plan.
    3
    Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify a storage mapping, even if the VMs to be migrated do not have disk images assigned. The mapping can be empty in this case.
    6
    Specify the name of the StorageMap CR.
    7
    You can use either the id or the name parameter to specify the source VMs.
    8
    Specify the OVA VM UUID.
    9
    Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    10
    Specify the name of the Hook CR.
    11
    Allowed values are PreHook, which runs before the migration starts, and PostHook, which runs after the migration is complete.
  7. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
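    If you want to schedule the cutover rather than compute the timestamp by hand, GNU date can emit a value in the required format (the two-hour offset is an arbitrary example):

    ```shell
    # Produce an ISO 8601 timestamp with a UTC offset, two hours from now.
    # --iso-8601 is a GNU coreutils extension; BSD date uses different flags.
    cutover=$(date --iso-8601=seconds -d '+2 hours')
    echo "$cutover"
    ```

    You can then set the cutover field of the Migration manifest to this value.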

5.2.5. Migrating from a Red Hat OpenShift Virtualization source provider

You can use a Red Hat OpenShift Virtualization provider as a source provider as well as a destination provider.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: openshift
        createdForResourceType: providers
    type: Opaque
    stringData:
      token: <token> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify a token for a service account with cluster-admin privileges. If both token and url are left blank, the local OpenShift cluster is used.
    3
    Specify the user password.
    4
    Specify "true" to skip certificate verification or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the CA certificate is not required, but the migration is insecure: the transferred data is sent over an insecure connection, and potentially sensitive data could be exposed.
    5
    When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
    6
    Specify the URL of the endpoint of the API server.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: openshift
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the endpoint of the API server.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source:
            name: <network_name>
            type: pod
        - destination:
            name: <network_attachment_definition> 2
            namespace: <network_attachment_definition_namespace> 3
            type: multus
          source:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    Specify a network attachment definition for each additional OpenShift Virtualization network. Specify the namespace either by using the namespace property or with a name built as follows: <network_namespace>/<network_name>.
    3
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            name: <storage_class>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/konveyor/hook-runner
      playbook: |
        LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
        YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
        IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
        cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
        bG9hZAoK
    EOF

    where:

    playbook refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  6. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 2
        network: 3
          name: <network_map> 4
          namespace: <namespace>
        storage: 5
          name: <storage_map> 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms:
        - name: <source_vm>
          namespace: <namespace>
          hooks: 7
            - hook:
                namespace: <namespace>
                name: <hook> 8
              step: <step> 9
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify only one network map and one storage map per plan.
    3
    Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify a storage mapping, even if the VMs to be migrated do not have disk images assigned. The mapping can be empty in this case.
    6
    Specify the name of the StorageMap CR.
    7
    Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    8
    Specify the name of the Hook CR.
    9
    Allowed values are PreHook, which runs before the migration starts, and PostHook, which runs after the migration is complete.
  7. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

5.3. Canceling a migration

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration> -n <namespace> 1
    1
    Specify the name of the Migration CR.

Canceling the migration of individual VMs

  1. Add the individual VMs to the spec.cancel block of the Migration manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: <namespace>
    ...
    spec:
      cancel:
      - id: vm-102 1
      - id: vm-203
      - name: rhel8-vm
    EOF
    1
    You can specify a VM by using the id key or the name key.

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

    $ oc get migration/<migration> -n <namespace> -o yaml