
Chapter 5. Migrating virtual machines from the command line


You can migrate virtual machines to OpenShift Virtualization from the command line.

Important

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and the migration plans themselves).

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

Table 5.1. Example migration plan roles and their privileges

Role                                        Description
plans.forklift.konveyor.io-v1beta1-view     Can view migration plans but cannot create, delete, or modify them
plans.forklift.konveyor.io-v1beta1-edit     Can create, delete, or modify (all parts of edit permissions) individual migration plans
plans.forklift.konveyor.io-v1beta1-admin    All edit privileges and the ability to delete the entire collection of migration plans

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1), and an action (for example, view, edit). An example of binding one of these roles to a user follows.
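For example, to give a non-administrator read-only access to migration plans in a single namespace, an administrator can bind one of these cluster roles to the user with a role binding. The binding name, user, and namespace below are placeholders:

    $ oc create rolebinding <rolebinding_name> \
        --clusterrole=plans.forklift.konveyor.io-v1beta1-view \
        --user=<non_admin_user> \
        -n <namespace>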

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace (a sample Role manifest implementing this set follows the note after Table 5.2):

  • Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
  • Attach providers created by administrators to storage maps, network maps, and migration plans
  • No ability to create providers or to change system settings
Table 5.2. Example permissions required for non-administrators to work with migration plan components but not create providers

Actions                                          API group              Resource
get, list, watch, create, update, patch, delete  forklift.konveyor.io   plans
get, list, watch, create, update, patch, delete  forklift.konveyor.io   migrations
get, list, watch, create, update, patch, delete  forklift.konveyor.io   hooks
get, list, watch                                 forklift.konveyor.io   providers
get, list, watch, create, update, patch, delete  forklift.konveyor.io   networkmaps
get, list, watch, create, update, patch, delete  forklift.konveyor.io   storagemaps
get, list, watch                                 forklift.konveyor.io   forkliftcontrollers

Note

To create migration plans, non-administrators must have the create permissions that are part of the edit roles for network maps and storage maps, even when they use a template for a network map or a storage map.
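As a sketch, the permission set in Table 5.2 can be expressed as a namespaced Role similar to the following. The Role name is illustrative; the API group, resources, and verbs are taken directly from the table. Bind the Role to the non-administrator with a RoleBinding in the same namespace.

    $ cat << EOF | oc apply -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: <role_name>
      namespace: <namespace>
    rules:
      # Full access to migration plan components in this namespace
      - apiGroups: ["forklift.konveyor.io"]
        resources: ["plans", "migrations", "hooks", "networkmaps", "storagemaps"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      # Read-only access to providers and controller settings
      - apiGroups: ["forklift.konveyor.io"]
        resources: ["providers", "forkliftcontrollers"]
        verbs: ["get", "list", "watch"]
    EOF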

5.2. Retrieving a VMware vSphere moRef

When you migrate VMs with a VMware vSphere source provider using Migration Toolkit for Virtualization (MTV) from the CLI, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.

You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.

Procedure

  1. Retrieve the routes for the project:

    $ oc get route -n openshift-mtv
  2. Retrieve the Inventory service route:

    $ oc get route <inventory_service> -n openshift-mtv
  3. Retrieve the access token:

    $ TOKEN=$(oc whoami -t)
  4. Retrieve the moRef of a VMware vSphere provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere -k
  5. Retrieve the datastores of a VMware vSphere source provider:

    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider_id>/datastores/ -k

    Example output

    [
      {
        "id": "datastore-11",
        "parent": {
          "kind": "Folder",
          "id": "group-s5"
        },
        "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
        "revision": 46,
        "name": "v2v_general_porpuse_ISCSI_DC",
        "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
      },
      {
        "id": "datastore-730",
        "parent": {
          "kind": "Folder",
          "id": "group-s5"
        },
        "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
        "revision": 46,
        "name": "f01-h27-640-SSD_2",
        "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
      },
     ...

In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
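You can retrieve VM moRefs in the same way. For example, the following request lists the VMs of the source provider; the jq filter is optional and illustrative, printing each VM name with its moRef:

    $ curl -H "Authorization: Bearer $TOKEN" \
        https://<inventory_service_route>/providers/vsphere/<provider_id>/vms/ -k \
        | jq '.[] | {name: .name, moRef: .id}'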

5.3. Migrating virtual machines

You migrate virtual machines (VMs) from the command line (CLI) by creating MTV custom resources (CRs).

Important

You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

Important

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Note

Migration using OpenStack source providers supports only VMs that use Cinder volumes exclusively.

Prerequisites

  • VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.
  • Red Hat Virtualization (RHV) only: If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.
Note
  • Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
  • LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not in use by VMs in the target environment at the same time, because simultaneous use might lead to data corruption.
  • Migration of Fibre Channel LUNs is not supported.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: <provider_type> 2
        createdForResourceType: providers
    type: Opaque
    stringData: 3
      user: <user> 4
      password: <password> 5
      insecureSkipVerify: <true/false> 6
      domainName: <domain_name> 7
      projectName: <project_name> 8
      regionName: <region_name> 9
      cacert: | 10
        <ca_certificate>
      url: <api_end_point> 11
      thumbprint: <vcenter_fingerprint> 12
      token: <service_account_bearer_token> 13
    EOF
    1 The ownerReferences section is optional.
    2 Specify the type of source provider. Allowed values are ovirt, vsphere, openstack, ova, and openshift. This label is needed to verify the credentials are correct when the remote system is accessible and, for RHV, to retrieve the Manager CA certificate when a third-party certificate is specified.
    3 The stringData section for OVA is different and is described in a note that follows the description of the Secret manifest.
    4 Specify the vCenter user, the RHV Manager user, or the OpenStack user.
    5 Specify the user password.
    6 Specify <true> to skip certificate verification; the migration then proceeds over an insecure connection and no certificate is required. An insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. Specifying <false> verifies the certificate.
    7 OpenStack only: Specify the domain name.
    8 OpenStack only: Specify the project name.
    9 OpenStack only: Specify the name of the OpenStack region.
    10 RHV and OpenStack only: For RHV, enter the Manager CA certificate unless it was replaced by a third-party certificate, in which case, enter the Manager Apache CA certificate. You can retrieve the Manager CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA. For OpenStack, enter the CA certificate for connecting to the source environment. The certificate is not used when insecureSkipVerify is set to <true>.
    11 Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for RHV, or https://<identity_service>/v3 for OpenStack.
    12 VMware only: Specify the vCenter SHA-1 fingerprint.
    13 OpenShift only: Token for a service account with cluster-admin privileges.
    Note

    The stringData section for an OVA Secret manifest is as follows:

    stringData:
      url: <nfs_server:/nfs_path>

    where:
    nfs_server: An IP address or hostname of the server where the share was created.
    nfs_path: The path on the server where the OVA files are stored.
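
    For reference, a filled-in Secret for a VMware vSphere source provider might look like the following. All values are illustrative placeholders; the thumbprint format matches the output of the command in Obtaining the SHA-1 fingerprint of a vCenter host.

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: vsphere-credentials
      namespace: openshift-mtv
      labels:
        createdForProviderType: vsphere
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: administrator@vsphere.local
      password: <password>
      insecureSkipVerify: "false"
      cacert: |
        -----BEGIN CERTIFICATE-----
        <vcenter_ca_certificate>
        -----END CERTIFICATE-----
      url: https://vcenter.example.com/sdk
      thumbprint: 01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
    EOF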

  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: <provider_type> 1
      url: <api_end_point> 2
      settings:
        vddkInitImage: <registry_route_or_server_path>/vddk:<tag> 3
      secret:
        name: <secret> 4
        namespace: <namespace>
    EOF
    1 Specify the type of source provider. Allowed values are ovirt, vsphere, openstack, ova, and openshift.
    2 Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for RHV, or https://<identity_service>/v3 for OpenStack.
    3 VMware only: Specify the VDDK image that you created.
    4 Specify the name of the provider Secret CR.
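
    Optionally, verify that MTV can connect to the source provider before continuing. The Provider CR records the result of its connection and inventory checks in its status conditions; the Ready condition name used below is an assumption, so inspect the full status if it is not present:

    $ oc get provider <source_provider> -n <namespace> -o yaml
    $ oc wait provider/<source_provider> -n <namespace> --for=condition=Ready --timeout=5m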
  3. VMware only: Create a Host manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Host
    metadata:
      name: <vmware_host>
      namespace: <namespace>
    spec:
      provider:
        namespace: <namespace>
        name: <source_provider> 1
      id: <source_host_mor> 2
      ipAddress: <source_network_ip> 3
    EOF
    1 Specify the name of the VMware Provider CR.
    2 Specify the managed object reference (moRef) of the VMware host. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    3 Specify the IP address of the VMware migration network.
  4. Create a NetworkMap manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id> 3
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 4
            namespace: <network_attachment_definition_namespace> 5
            type: multus
          source:
            name: <network_attachment_definition> 6
            namespace: <network_attachment_definition_namespace> 7
            type: multus 8
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1 Allowed values are pod and multus.
    2 You can use either the id or the name parameter to specify the source network.
    3 Specify the VMware network moRef, the RHV network UUID, or the OpenStack network UUID. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    4 Specify a network attachment definition for each additional OpenShift Virtualization network.
    5 Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
    6 Specify a network attachment definition for each additional OpenShift Virtualization network.
    7 Required only when type is multus. Here, namespace can either be specified using the namespace property or with a name built as follows: <network_namespace>/<network_name>.
    8 OpenShift only.
  5. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_datastore> 2
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            id: <source_datastore>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1 Allowed values are ReadWriteOnce and ReadWriteMany.
    2 Specify the VMware datastore moRef, the RHV storage domain UUID, or the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    Note

    For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>".

  6. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/konveyor/hook-runner 1
      playbook: | 2
        LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
        YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
        IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
        cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
        bG9hZAoK
    EOF
    1 You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
    2 Optional: Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.
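
    The playbook value is plain base64. For reference, the encoded playbook in the example above decodes to the following Ansible playbook, which loads the plan and workload variable files that are made available to the hook under /tmp/hook/:

    ---
    - name: Main
      hosts: localhost
      tasks:
      - name: Load Plan
        include_vars:
          file: "/tmp/hook/plan.yml"
          name: plan
      - name: Load Workload
        include_vars:
          file: "/tmp/hook/workload.yml"
          name: workload

    You can decode the stored playbook or encode your own with standard tooling, for example (base64 -w0 is GNU coreutils syntax; omit -w0 where it is not supported):

    $ oc get hook <hook> -n <namespace> -o jsonpath='{.spec.playbook}' | base64 -d
    $ base64 -w0 <playbook_file>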
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      warm: true 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 3
        network: 4
          name: <network_map> 5
          namespace: <namespace>
        storage: 6
          name: <storage_map> 7
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 8
        - id: <source_vm> 9
        - name: <source_vm>
          namespace: <namespace> 10
          hooks: 11
            - hook:
                namespace: <namespace>
                name: <hook> 12
              step: <step> 13
    EOF
    1 Specify the name of the Plan CR.
    2 Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.
    3 Specify only one network map and one storage map per plan.
    4 Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    5 Specify the name of the NetworkMap CR.
    6 Specify a storage mapping even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
    7 Specify the name of the StorageMap CR.
    8 For all source providers except OpenShift Virtualization, you can use either the id or the name parameter to specify the source VMs. For an OpenShift Virtualization source provider, you can use only the name parameter, not the id parameter, to specify the source VMs.
    9 Specify the VMware VM moRef, the RHV VM UUID, or the OpenStack VM UUID. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    10 OpenShift Virtualization source provider only.
    11 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    12 Specify the name of the Hook CR.
    13 Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
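
    Optionally, wait for MTV to validate the plan before running it. The Plan CR records validation results in its status conditions; the Ready condition name used below is an assumption, so check oc get plan <plan> -n <namespace> -o yaml if it differs in your release:

    $ oc wait plan/<plan> -n <namespace> --for=condition=Ready --timeout=5m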
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration> 1
      namespace: <namespace>
    spec:
      plan:
        name: <plan> 2
        namespace: <namespace>
      cutover: <cutover_time> 3
    EOF
    1 Specify the name of the Migration CR.
    2 Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachine CR for each VM that is migrated.
    3 Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.

    You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.

  9. Retrieve the Migration CR to monitor the progress of the migration:

    $ oc get migration/<migration> -n <namespace> -o yaml
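
    For a quicker view than the full YAML, you can watch the Migration CR or print selected status fields with jsonpath. The per-VM field names used below (.status.vms[].name and .status.vms[].phase) are assumptions based on the typical Migration status layout; fall back to -o yaml if they differ in your release:

    $ oc get migration/<migration> -n <namespace> -w
    $ oc get migration/<migration> -n <namespace> -o jsonpath='{range .status.vms[*]}{.name}{"\t"}{.phase}{"\n"}{end}'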

5.4. Obtaining the SHA-1 fingerprint of a vCenter host

You must obtain the SHA-1 fingerprint of a vCenter host in order to create a Secret CR.

Procedure

  • Run the following command:

    $ openssl s_client \
        -connect <vcenter_host>:443 \ 1
        < /dev/null 2>/dev/null \
        | openssl x509 -fingerprint -noout -in /dev/stdin \
        | cut -d '=' -f 2
    1 Specify the IP address or FQDN of the vCenter host.

    Example output

    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67

5.5. Canceling a migration

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration> -n <namespace> 1
    1 Specify the name of the Migration CR.

Canceling the migration of individual VMs

  1. Add the individual VMs to the spec.cancel block of the Migration manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: <namespace>
    ...
    spec:
      cancel:
      - id: vm-102 1
      - id: vm-203
      - name: rhel8-vm
    EOF
    1 You can specify a VM by using the id key or the name key.

    The value of the id key is the managed object reference (moRef) for a VMware VM, or the VM UUID for a RHV VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

    $ oc get migration/<migration> -n <namespace> -o yaml