Chapter 5. Migrating virtual machines from the command line
You can migrate virtual machines to OpenShift Virtualization from the command line.
You must ensure that all prerequisites are met.
5.1. Permissions needed by non-administrators to work with migration plan components
If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).
By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.
For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:
| Role | Description |
|---|---|
| `plans.forklift.konveyor.io-v1beta1-view` | Can view migration plans, but not create, delete, or modify them |
| `plans.forklift.konveyor.io-v1beta1-edit` | Can create, delete, or modify (all parts of `edit` permissions) individual migration plans |
| `plans.forklift.konveyor.io-v1beta1-admin` | All `edit` permissions, as well as the ability to view, create, delete, or modify role bindings related to migration plans |
Note that pre-defined cluster roles include a resource (for example, `plans`), an API group (for example, `forklift.konveyor.io-v1beta1`), and an action (for example, `view` or `edit`).
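As a quick illustration of that naming scheme, a role name can be composed from its three parts. This is a minimal sketch; the exact separator layout is inferred from the examples in the text, not taken from an authoritative source:

```python
# Compose a pre-defined cluster role name from its three parts.
# Separator layout ("<resource>.<api_group>-<action>") is inferred from
# the examples in the text: resource "plans", API group
# "forklift.konveyor.io-v1beta1", action "view".
def role_name(resource: str, api_group: str, action: str) -> str:
    return f"{resource}.{api_group}-{action}"

print(role_name("plans", "forklift.konveyor.io-v1beta1", "view"))
# plans.forklift.konveyor.io-v1beta1-view
```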
As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:
- Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
- Attach providers created by administrators to storage maps, network maps, and migration plans
- Cannot create providers or change system settings
| Actions | API group | Resource |
|---|---|---|
| `get`, `list`, `watch`, `create`, `update`, `patch`, `delete` | `forklift.konveyor.io` | `plans` |
| `get`, `list`, `watch`, `create`, `update`, `patch`, `delete` | `forklift.konveyor.io` | `migrations` |
| `get`, `list`, `watch`, `create`, `update`, `patch`, `delete` | `forklift.konveyor.io` | `hooks` |
| `get`, `list`, `watch` | `forklift.konveyor.io` | `providers` |
| `get`, `list`, `watch`, `create`, `update`, `patch`, `delete` | `forklift.konveyor.io` | `networkmaps` |
| `get`, `list`, `watch`, `create`, `update`, `patch`, `delete` | `forklift.konveyor.io` | `storagemaps` |
| `get`, `list`, `watch` | `forklift.konveyor.io` | `forkliftcontrollers` |
| `create`, `patch`, `delete` | Empty string | `secrets` |
To create migration plans, non-administrators must have the `create` permissions that are part of the `edit` roles for network maps and storage maps, even when they use a template for a network map or a storage map.
5.2. Retrieving a VMware vSphere moRef
When you migrate VMs with a VMware vSphere source provider using Migration Toolkit for Virtualization (MTV) from the CLI, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.
You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.
Procedure
Retrieve the routes for the project:

```console
$ oc get route -n openshift-mtv
```

Retrieve the `Inventory` service route:

```console
$ oc get route <inventory_service> -n openshift-mtv
```

Retrieve the access token:

```console
$ TOKEN=$(oc whoami -t)
```

Retrieve the moRef of a VMware vSphere provider:

```console
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere -k
```

Retrieve the datastores of a VMware vSphere source provider:

```console
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider_id>/datastores/ -k
```

Example output:

```json
[
  {
    "id": "datastore-11",
    "parent": {
      "kind": "Folder",
      "id": "group-s5"
    },
    "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
    "revision": 46,
    "name": "v2v_general_porpuse_ISCSI_DC",
    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
  },
  {
    "id": "datastore-730",
    "parent": {
      "kind": "Folder",
      "id": "group-s5"
    },
    "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
    "revision": 46,
    "name": "f01-h27-640-SSD_2",
    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
  },
  ...
]
```
In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
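When scripting against the Inventory service, it can help to turn a response like the one above into a name-to-moRef lookup. A minimal sketch in Python, using a trimmed copy of the example response (only the `id` and `name` fields are kept):

```python
import json

# Trimmed copy of the example Inventory response above; only the fields
# used by the lookup ("id" and "name") are kept.
datastores_json = """
[
  {"id": "datastore-11", "name": "v2v_general_porpuse_ISCSI_DC"},
  {"id": "datastore-730", "name": "f01-h27-640-SSD_2"}
]
"""

def moref_by_name(response_body: str) -> dict:
    """Map each inventory entity name to its moRef (the "id" field)."""
    return {entity["name"]: entity["id"] for entity in json.loads(response_body)}

lookup = moref_by_name(datastores_json)
print(lookup["f01-h27-640-SSD_2"])  # datastore-730
```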
5.3. Migrating virtual machines
You migrate virtual machines (VMs) from the command line (CLI) by creating MTV custom resources (CRs). The CRs and the migration procedure vary by source provider.
You must specify a name for cluster-scoped CRs.
You must specify both a name and a namespace for namespace-scoped CRs.
To migrate to or from an OpenShift cluster that is different from the one the migration plan is defined on, you must have an OpenShift Virtualization service account token with cluster-admin privileges.
5.3.1. Migrating from a VMware vSphere source provider
You can migrate from a VMware vSphere source provider by using the CLI.
Procedure
Create a `Secret` manifest for the source provider credentials:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: vsphere
    createdForResourceType: providers
type: Opaque
stringData:
  user: <user> # 2
  password: <password> # 3
  insecureSkipVerify: <"true"/"false"> # 4
  cacert: | # 5
    <ca_certificate>
  url: <api_end_point> # 6
EOF
```

1. The `ownerReferences` section is optional.
2. Specify the vCenter user or the ESX/ESXi user.
3. Specify the password of the vCenter user or the ESX/ESXi user.
4. Specify `"true"` to skip certificate verification, or `"false"` to verify the certificate. Defaults to `"false"` if not specified. Skipping certificate verification means the certificate is not required, but the migration is insecure: the transferred data is sent over an insecure connection, and potentially sensitive data could be exposed.
5. When this field is not set and certificate verification is enabled, MTV attempts to use the system CA.
6. Specify the API endpoint URL of the vCenter or the ESX/ESXi, for example, `https://<vCenter_host>/sdk`.

Create a `Provider` manifest for the source provider:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: vsphere
  url: <api_end_point> # 1
  settings:
    vddkInitImage: <VDDK_image> # 2
    sdkEndpoint: vcenter # 3
  secret:
    name: <secret> # 4
    namespace: <namespace>
EOF
```

1. Specify the URL of the API endpoint, for example, `https://<vCenter_host>/sdk`.
2. Optional, but it is strongly recommended to create a VDDK image to accelerate migrations. Follow the OpenShift documentation to specify the VDDK image you created.
3. Options: `vcenter` or `esxi`.
4. Specify the name of the provider `Secret` CR.

Create a `Host` manifest:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Host
metadata:
  name: <vmware_host>
  namespace: <namespace>
spec:
  provider:
    namespace: <namespace>
    name: <source_provider> # 1
  id: <source_host_mor> # 2
  ipAddress: <source_network_ip> # 3
EOF
```

1. Specify the name of the VMware vSphere `Provider` CR.
2. Specify the managed object reference (moRef) of the VMware vSphere host. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
3. Specify the IP address of the VMware vSphere migration network.

Create a `NetworkMap` manifest to map the source and destination networks:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod # 1
      source: # 2
        id: <source_network_id>
        name: <source_network_name>
    - destination:
        name: <network_attachment_definition> # 3
        namespace: <network_attachment_definition_namespace> # 4
        type: multus
      source:
        id: <source_network_id>
        name: <source_network_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `pod` and `multus`.
2. You can use either the `id` or the `name` parameter to specify the source network. For `id`, specify the VMware vSphere network managed object reference (moRef). To retrieve the moRef, see Retrieving a VMware vSphere moRef.
3. Specify a network attachment definition for each additional OpenShift Virtualization network.
4. Required only when `type` is `multus`. Specify the namespace of the OpenShift Virtualization network attachment definition.

Create a `StorageMap` manifest to map source and destination storage:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode> # 1
      source:
        id: <source_datastore> # 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `ReadWriteOnce` and `ReadWriteMany`.
2. Specify the VMware vSphere datastore moRef, for example, `f2737930-b567-451a-9ceb-2887f6207009`. To retrieve the moRef, see Retrieving a VMware vSphere moRef.

Optional: Create a `Hook` manifest to run custom code on a VM during the phase specified in the `Plan` CR:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: |
    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    bG9hZAoK
EOF
```

where `playbook` refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the `image` must be `hook-runner`.

Note: You can use the default `hook-runner` image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
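The `playbook` value is plain Base64. A sketch of how such a value can be produced from a playbook file (the playbook text here is illustrative; any valid Ansible playbook works):

```python
import base64
import textwrap

# Illustrative playbook text; any valid Ansible playbook works.
playbook = """\
---
- name: Main
  hosts: localhost
  tasks:
  - name: Load Plan
    include_vars:
      file: "/tmp/hook/plan.yml"
      name: plan
"""

# Encode, then wrap to 76-character lines, mirroring the block-scalar
# layout used in the Hook manifest above.
encoded = base64.b64encode(playbook.encode()).decode()
wrapped = "\n".join(textwrap.wrap(encoded, 76))
print(wrapped)

# Round-trip check: decoding restores the original playbook text.
assert base64.b64decode(encoded).decode() == playbook
```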
Create a `Plan` manifest for the migration:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> # 1
  namespace: <namespace>
spec:
  warm: false # 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map: # 3
    network: # 4
      name: <network_map> # 5
      namespace: <namespace>
    storage: # 6
      name: <storage_map> # 7
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms: # 8
    - id: <source_vm> # 9
    - name: <source_vm>
      hooks: # 10
        - hook:
            namespace: <namespace>
            name: <hook> # 11
          step: <step> # 12
EOF
```

1. Specify the name of the `Plan` CR.
2. Specify whether the migration is warm (`true`) or cold (`false`). If you specify a warm migration without specifying a value for the `cutover` parameter in the `Migration` manifest, only the precopy stage runs.
3. Specify only one network map and one storage map per plan.
4. Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
5. Specify the name of the `NetworkMap` CR.
6. Specify a storage mapping even if the VMs to be migrated have no disk images. The mapping can be empty in this case.
7. Specify the name of the `StorageMap` CR.
8. You can use either the `id` or the `name` parameter to specify the source VMs.
9. Specify the VMware vSphere VM moRef. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
10. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
11. Specify the name of the `Hook` CR.
12. Allowed values are `PreHook`, to run the hook before the migration plan starts, or `PostHook`, to run it after the migration is complete.

Create a `Migration` manifest to run the `Plan` CR:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF
```

Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, `2024-04-04T01:23:45.678+09:00`.
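A sketch of producing a `cutover` value in that format with Python's standard `datetime` module; the instant and the `+09:00` offset are the example values from the note:

```python
from datetime import datetime, timedelta, timezone

# Build the example instant in a +09:00 offset and render it in ISO 8601
# with millisecond precision, matching the documented cutover format.
tz = timezone(timedelta(hours=9))
cutover = datetime(2024, 4, 4, 1, 23, 45, 678000, tzinfo=tz)
print(cutover.isoformat(timespec="milliseconds"))
# 2024-04-04T01:23:45.678+09:00
```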
5.3.2. Migrating from a Red Hat Virtualization source provider
You can migrate from a Red Hat Virtualization (RHV) source provider by using the CLI.
Prerequisites
If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.
- Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
- LUNs are not removed from the source provider during the migration, in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not being used by VMs in the target environment at the same time; simultaneous use might lead to data corruption.
Procedure
Create a `Secret` manifest for the source provider credentials:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: ovirt
    createdForResourceType: providers
type: Opaque
stringData:
  user: <user> # 2
  password: <password> # 3
  insecureSkipVerify: <"true"/"false"> # 4
  cacert: | # 5
    <ca_certificate>
  url: <api_end_point> # 6
EOF
```

1. The `ownerReferences` section is optional.
2. Specify the RHV Manager user.
3. Specify the user password.
4. Specify `"true"` to skip certificate verification, or `"false"` to verify the certificate. Defaults to `"false"` if not specified. Skipping certificate verification means the certificate is not required, but the migration is insecure: the transferred data is sent over an insecure connection, and potentially sensitive data could be exposed.
5. Enter the Manager CA certificate, unless it was replaced by a third-party certificate, in which case enter the Manager Apache CA certificate. You can retrieve the Manager CA certificate at `https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA`.
6. Specify the API endpoint URL, for example, `https://<engine_host>/ovirt-engine/api`.

Create a `Provider` manifest for the source provider:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: ovirt
  url: <api_end_point> # 1
  secret:
    name: <secret> # 2
    namespace: <namespace>
EOF
```

1. Specify the URL of the API endpoint, for example, `https://<engine_host>/ovirt-engine/api`.
2. Specify the name of the provider `Secret` CR.

Create a `NetworkMap` manifest to map the source and destination networks:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod # 1
      source: # 2
        id: <source_network_id>
        name: <source_network_name>
    - destination:
        name: <network_attachment_definition> # 3
        namespace: <network_attachment_definition_namespace> # 4
        type: multus
      source:
        id: <source_network_id>
        name: <source_network_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `pod` and `multus`.
2. You can use either the `id` or the `name` parameter to specify the source network. For `id`, specify the RHV network Universal Unique ID (UUID).
3. Specify a network attachment definition for each additional OpenShift Virtualization network.
4. Required only when `type` is `multus`. Specify the namespace of the OpenShift Virtualization network attachment definition.

Create a `StorageMap` manifest to map source and destination storage:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode> # 1
      source:
        id: <source_storage_domain> # 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `ReadWriteOnce` and `ReadWriteMany`.
2. Specify the RHV storage domain UUID.

Optional: Create a `Hook` manifest to run custom code on a VM during the phase specified in the `Plan` CR:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: |
    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    bG9hZAoK
EOF
```

where `playbook` refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the `image` must be `hook-runner`.

Note: You can use the default `hook-runner` image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
Create a `Plan` manifest for the migration:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> # 1
  namespace: <namespace>
spec:
  preserveClusterCpuModel: true # 2
  warm: false # 3
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map: # 4
    network: # 5
      name: <network_map> # 6
      namespace: <namespace>
    storage: # 7
      name: <storage_map> # 8
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms: # 9
    - id: <source_vm> # 10
    - name: <source_vm>
      hooks: # 11
        - hook:
            namespace: <namespace>
            name: <hook> # 12
          step: <step> # 13
EOF
```

1. Specify the name of the `Plan` CR.
2. See the note below.
3. Specify whether the migration is warm (`true`) or cold (`false`). If you specify a warm migration without specifying a value for the `cutover` parameter in the `Migration` manifest, only the precopy stage runs.
4. Specify only one network map and one storage map per plan.
5. Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
6. Specify the name of the `NetworkMap` CR.
7. Specify a storage mapping even if the VMs to be migrated have no disk images. The mapping can be empty in this case.
8. Specify the name of the `StorageMap` CR.
9. You can use either the `id` or the `name` parameter to specify the source VMs.
10. Specify the RHV VM UUID.
11. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
12. Specify the name of the `Hook` CR.
13. Allowed values are `PreHook`, to run the hook before the migration plan starts, or `PostHook`, to run it after the migration is complete.
Note:

- If the migrated machine is set with a custom CPU model, it will be set with that CPU model in the destination cluster, regardless of the setting of `preserveClusterCpuModel`.
- If the migrated machine is not set with a custom CPU model:
  - If `preserveClusterCpuModel` is set to `true`, MTV checks the CPU model of the VM when it runs in RHV, based on the cluster's configuration, and then sets the migrated VM with that CPU model.
  - If `preserveClusterCpuModel` is set to `false`, MTV does not set a CPU type, and the VM is set with the default CPU model of the destination cluster.
Create a `Migration` manifest to run the `Plan` CR:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF
```

Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, `2024-04-04T01:23:45.678+09:00`.
5.3.3. Migrating from an OpenStack source provider
You can migrate from an OpenStack source provider by using the CLI.
Procedure
Create a `Secret` manifest for the source provider credentials:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: openstack
    createdForResourceType: providers
type: Opaque
stringData:
  user: <user> # 2
  password: <password> # 3
  insecureSkipVerify: <"true"/"false"> # 4
  domainName: <domain_name>
  projectName: <project_name>
  regionName: <region_name>
  cacert: | # 5
    <ca_certificate>
  url: <api_end_point> # 6
EOF
```

1. The `ownerReferences` section is optional.
2. Specify the OpenStack user.
3. Specify the OpenStack user password.
4. Specify `"true"` to skip certificate verification, or `"false"` to verify the certificate. Defaults to `"false"` if not specified. Skipping certificate verification means the certificate is not required, but the migration is insecure: the transferred data is sent over an insecure connection, and potentially sensitive data could be exposed.
5. When this field is not set and certificate verification is enabled, MTV attempts to use the system CA.
6. Specify the API endpoint URL, for example, `https://<identity_service>/v3`.

Create a `Provider` manifest for the source provider:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: openstack
  url: <api_end_point> # 1
  secret:
    name: <secret> # 2
    namespace: <namespace>
EOF
```

1. Specify the URL of the API endpoint, for example, `https://<identity_service>/v3`.
2. Specify the name of the provider `Secret` CR.

Create a `NetworkMap` manifest to map the source and destination networks:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod # 1
      source: # 2
        id: <source_network_id>
        name: <source_network_name>
    - destination:
        name: <network_attachment_definition> # 3
        namespace: <network_attachment_definition_namespace> # 4
        type: multus
      source:
        id: <source_network_id>
        name: <source_network_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `pod` and `multus`.
2. You can use either the `id` or the `name` parameter to specify the source network. For `id`, specify the OpenStack network UUID.
3. Specify a network attachment definition for each additional OpenShift Virtualization network.
4. Required only when `type` is `multus`. Specify the namespace of the OpenShift Virtualization network attachment definition.

Create a `StorageMap` manifest to map source and destination storage:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode> # 1
      source:
        id: <source_volume_type> # 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `ReadWriteOnce` and `ReadWriteMany`.
2. Specify the OpenStack volume type UUID.

Optional: Create a `Hook` manifest to run custom code on a VM during the phase specified in the `Plan` CR:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: |
    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    bG9hZAoK
EOF
```

where `playbook` refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the `image` must be `hook-runner`.

Note: You can use the default `hook-runner` image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
Create a `Plan` manifest for the migration:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> # 1
  namespace: <namespace>
spec:
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map: # 2
    network: # 3
      name: <network_map> # 4
      namespace: <namespace>
    storage: # 5
      name: <storage_map> # 6
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms: # 7
    - id: <source_vm> # 8
    - name: <source_vm>
      hooks: # 9
        - hook:
            namespace: <namespace>
            name: <hook> # 10
          step: <step> # 11
EOF
```

1. Specify the name of the `Plan` CR.
2. Specify only one network map and one storage map per plan.
3. Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
4. Specify the name of the `NetworkMap` CR.
5. Specify a storage mapping, even if the VMs to be migrated have no disk images. The mapping can be empty in this case.
6. Specify the name of the `StorageMap` CR.
7. You can use either the `id` or the `name` parameter to specify the source VMs.
8. Specify the OpenStack VM UUID.
9. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
10. Specify the name of the `Hook` CR.
11. Allowed values are `PreHook`, to run the hook before the migration plan starts, or `PostHook`, to run it after the migration is complete.
Create a `Migration` manifest to run the `Plan` CR:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF
```

Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, `2024-04-04T01:23:45.678+09:00`.
5.3.4. Migrating from an Open Virtual Appliance (OVA) source provider
You can use Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider, and migrate from them by using the CLI.
Procedure
Create a `Secret` manifest for the source provider credentials:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: ova
    createdForResourceType: providers
type: Opaque
stringData:
  url: <nfs_server:/nfs_path> # 2
EOF
```

1. The `ownerReferences` section is optional.
2. Specify the NFS server and the path of the NFS share where the OVA files are stored.

Create a `Provider` manifest for the source provider:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: ova
  url: <nfs_server:/nfs_path> # 1
  secret:
    name: <secret> # 2
    namespace: <namespace>
EOF
```

1. Specify the NFS server and the path of the NFS share where the OVA files are stored.
2. Specify the name of the provider `Secret` CR.

Create a `NetworkMap` manifest to map the source and destination networks:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod # 1
      source:
        id: <source_network_id> # 2
    - destination:
        name: <network_attachment_definition> # 3
        namespace: <network_attachment_definition_namespace> # 4
        type: multus
      source:
        id: <source_network_id>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `pod` and `multus`.
2. Specify the OVA network Universal Unique ID (UUID).
3. Specify a network attachment definition for each additional OpenShift Virtualization network.
4. Required only when `type` is `multus`. Specify the namespace of the OpenShift Virtualization network attachment definition.

Create a `StorageMap` manifest to map source and destination storage:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode> # 1
      source:
        name: Dummy storage for source provider <provider_name> # 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `ReadWriteOnce` and `ReadWriteMany`.
2. For OVA, the `StorageMap` can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing `<provider_name>` with the actual name of the provider.

Optional: Create a `Hook` manifest to run custom code on a VM during the phase specified in the `Plan` CR:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: |
    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    bG9hZAoK
EOF
```

where `playbook` refers to an optional Base64-encoded Ansible playbook. If you specify a playbook, the `image` must be `hook-runner`.

Note: You can use the default `hook-runner` image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
Create a `Plan` manifest for the migration:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> # 1
  namespace: <namespace>
spec:
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map: # 2
    network: # 3
      name: <network_map> # 4
      namespace: <namespace>
    storage: # 5
      name: <storage_map> # 6
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms: # 7
    - id: <source_vm> # 8
    - name: <source_vm>
      hooks: # 9
        - hook:
            namespace: <namespace>
            name: <hook> # 10
          step: <step> # 11
EOF
```

1. Specify the name of the `Plan` CR.
2. Specify only one network map and one storage map per plan.
3. Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
4. Specify the name of the `NetworkMap` CR.
5. Specify a storage mapping, even if the VMs to be migrated have no disk images. The mapping can be empty in this case.
6. Specify the name of the `StorageMap` CR.
7. You can use either the `id` or the `name` parameter to specify the source VMs.
8. Specify the OVA VM UUID.
9. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
10. Specify the name of the `Hook` CR.
11. Allowed values are `PreHook`, to run the hook before the migration plan starts, or `PostHook`, to run it after the migration is complete.
Create a `Migration` manifest to run the `Plan` CR:

```yaml
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF
```

Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, `2024-04-04T01:23:45.678+09:00`.
5.3.5. Migrating from a Red Hat OpenShift Virtualization source provider
You can use a Red Hat OpenShift Virtualization provider as either a source provider or as a destination provider.
Procedure
Create a `Secret` manifest for the source provider credentials:

```
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: openshift
    createdForResourceType: providers
type: Opaque
stringData:
  token: <token> # 2
  password: <password> # 3
  insecureSkipVerify: <"true"/"false"> # 4
  cacert: | # 5
    <ca_certificate>
  url: <api_end_point> # 6
EOF
```

1. The `ownerReferences` section is optional.
2. Specify a token for a service account with `cluster-admin` privileges. If both `token` and `url` are left blank, the local OpenShift cluster is used.
3. Specify the user password.
4. Specify `"true"` to skip certificate verification or `"false"` to verify the certificate. Defaults to `"false"` if not specified. Skipping verification makes the certificate unnecessary, but the migration then runs over an insecure connection and potentially sensitive data could be exposed.
5. When this field is not set and skipping certificate verification is disabled, MTV attempts to use the system CA.
6. Specify the URL of the API server endpoint.
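Kubernetes accepts the credentials above as plain strings under `stringData` and stores them base64-encoded under `data`. The following Python sketch of that conversion is illustrative (the `to_data` helper is not an MTV API):

```python
import base64

def to_data(string_data: dict) -> dict:
    """Base64-encode each value, mirroring how the API server converts
    a Secret's stringData field into its stored data field."""
    return {k: base64.b64encode(v.encode()).decode() for k, v in string_data.items()}

encoded = to_data({"insecureSkipVerify": "false", "url": "https://api.example.com:6443"})
print(encoded["insecureSkipVerify"])
# prints ZmFsc2U=
```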
Create a `Provider` manifest for the source provider:

```
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: openshift
  url: <api_end_point> # 1
  secret:
    name: <secret> # 2
    namespace: <namespace>
EOF
```

1. Specify the URL of the API server endpoint.
2. Specify the name of the provider `Secret` CR.
Create a `NetworkMap` manifest to map the source and destination networks:

```
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod # 1
      source:
        name: <network_name>
        type: pod
    - destination:
        name: <network_attachment_definition> # 2
        namespace: <network_attachment_definition_namespace> # 3
        type: multus
      source:
        name: <network_attachment_definition>
        namespace: <network_attachment_definition_namespace>
        type: multus
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `pod` and `multus`.
2. Specify a network attachment definition for each additional OpenShift Virtualization network. Specify the namespace either by using the `namespace` property or by including it in the name, in the form `<network_namespace>/<network_name>`.
3. Required only when `type` is `multus`. Specify the namespace of the OpenShift Virtualization network attachment definition.
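Callout 2 allows the namespace to be carried inside the name as `<network_namespace>/<network_name>`. A minimal Python sketch of that resolution logic (the `split_nad_name` helper is hypothetical, for illustration only):

```python
def split_nad_name(name: str, default_namespace: str) -> tuple:
    """Split a '<namespace>/<name>' network attachment definition reference;
    fall back to the explicitly supplied namespace when no prefix is present."""
    if "/" in name:
        namespace, short_name = name.split("/", 1)
        return namespace, short_name
    return default_namespace, name

print(split_nad_name("demo-ns/vlan10", "fallback"))  # prints ('demo-ns', 'vlan10')
print(split_nad_name("vlan10", "fallback"))          # prints ('fallback', 'vlan10')
```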
Create a `StorageMap` manifest to map source and destination storage:

```
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode> # 1
      source:
        name: <storage_class>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

1. Allowed values are `ReadWriteOnce` and `ReadWriteMany`.
Optional: Create a `Hook` manifest to run custom code on a VM during the phase specified in the `Plan` CR:

```
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: |
    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    bG9hZAoK
EOF
```

where `playbook` refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the `image` must be `hook-runner`.

Note: You can use the default `hook-runner` image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
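The `playbook` value above is simply the Base64 encoding of a plain-text Ansible playbook. A sketch of producing such a value, assuming the playbook is held in a local string:

```python
import base64

# A shortened version of the playbook shown in the Hook manifest above.
playbook = """---
- name: Main
  hosts: localhost
  tasks:
  - name: Load Plan
    include_vars:
      file: "/tmp/hook/plan.yml"
      name: plan
"""

# Base64-encode the playbook text for the Hook CR's spec.playbook field.
encoded = base64.b64encode(playbook.encode()).decode()

# Round-trip check: decoding recovers the original playbook.
assert base64.b64decode(encoded).decode() == playbook
print(encoded[:8])  # prints LS0tCi0g — the same prefix as the manifest above
```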
Create a `Plan` manifest for the migration:

```
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> # 1
  namespace: <namespace>
spec:
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map: # 2
    network: # 3
      name: <network_map> # 4
      namespace: <namespace>
    storage: # 5
      name: <storage_map> # 6
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms:
    - name: <source_vm>
      namespace: <namespace>
      hooks: # 7
        - hook:
            namespace: <namespace>
            name: <hook> # 8
          step: <step> # 9
EOF
```

1. Specify the name of the `Plan` CR.
2. Specify only one network map and one storage map per plan.
3. Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
4. Specify the name of the `NetworkMap` CR.
5. Specify a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
6. Specify the name of the `StorageMap` CR.
7. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
8. Specify the name of the `Hook` CR.
9. Allowed values are `PreHook`, which runs before the migration plan starts, or `PostHook`, which runs after the migration is complete.
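The constraints in callouts 7 and 9 (at most two hooks per VM, each running during a separate step, with steps limited to `PreHook` and `PostHook`) can be expressed as a small check. The `validate_hooks` helper below is illustrative, not part of MTV:

```python
ALLOWED_STEPS = {"PreHook", "PostHook"}

def validate_hooks(hooks: list) -> None:
    """Enforce the Plan CR hook rules: at most two hooks per VM,
    each running during a separate migration step."""
    if len(hooks) > 2:
        raise ValueError("a VM may have at most two hooks")
    steps = [h["step"] for h in hooks]
    if any(s not in ALLOWED_STEPS for s in steps):
        raise ValueError("step must be PreHook or PostHook")
    if len(set(steps)) != len(steps):
        raise ValueError("each hook must run during a separate step")

# One hook before the plan starts and one after migration completes: valid.
validate_hooks([{"name": "pre", "step": "PreHook"},
                {"name": "post", "step": "PostHook"}])
```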
Create a `Migration` manifest to run the `Plan` CR:

```
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF
```

Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, `2024-04-04T01:23:45.678+09:00`.
5.4. Canceling a migration
You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).
Canceling an entire migration
Delete the `Migration` CR:

```
$ oc delete migration <migration> -n <namespace> # 1
```

1. Specify the name of the `Migration` CR.
Canceling the migration of individual VMs
Add the individual VMs to the `spec.cancel` block of the `Migration` manifest:

```
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
...
spec:
  cancel:
    - id: vm-102 # 1
    - id: vm-203
    - name: rhel8-vm
EOF
```

1. You can specify a VM by using the `id` key or the `name` key.
The value of the `id` key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.

Retrieve the `Migration` CR to monitor the progress of the remaining VMs:

```
$ oc get migration/<migration> -n <namespace> -o yaml
```