Chapter 3. Migrating virtual machines to OpenShift Virtualization
You can migrate virtual machines (VMs) to OpenShift Virtualization by using the MTV web console or the command line interface (CLI).
You can run a cold or a warm migration. For details, see Warm migration.
3.1. Migration environment requirements
Check your migration environment to ensure that the following requirements are met.
VMware environment requirements
- VMware vSphere must be version 6.5 or later.
- If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.
Virtual machines:
- VMware Tools is installed.
- ISO/CDROM disks are unmounted.
- Each NIC must contain no more than one IPv4 and/or one IPv6 address.
- VM name contains only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.
- VM name does not duplicate the name of a VM in the OpenShift Virtualization environment.
- Operating system is certified and supported for use as a guest operating system with OpenShift Virtualization and for conversion to KVM with virt-v2v.
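The naming rules above follow the standard DNS-label pattern, so candidate VM names can be checked before a plan is created. A minimal sketch in shell; the function name is illustrative:

```shell
#!/bin/sh
# Validate a VM name against the MTV naming rules: lowercase letters,
# numbers, and hyphens only; 1-253 characters; alphanumeric first and
# last characters. The function name is illustrative.
is_valid_vm_name() {
  name="$1"
  # Length check: 1 to 253 characters.
  [ "${#name}" -ge 1 ] && [ "${#name}" -le 253 ] || return 1
  # Character and boundary check.
  printf '%s\n' "$name" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

is_valid_vm_name "rhel8-web-01" && echo "rhel8-web-01: valid"
is_valid_vm_name "My.VM" || echo "My.VM: invalid"
```

Names that fail the check can be renamed in vCenter before the migration plan is created.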
Network requirements
- IP addresses, VLANs, and other network configuration settings must not be changed before or after migration. The MAC addresses of the virtual machines are preserved during migration.
- Uninterrupted and reliable network connections between the clusters and the replication repository.
- Network ports enabled in the firewall rules.
Port | Protocol | Source | Destination | Purpose
---|---|---|---|---
443 | TCP | OpenShift nodes | VMware vCenter | VMware provider inventory; disk transfer authentication
443 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer authentication
902 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer data copy
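Reachability of these ports can be verified from an OpenShift node before starting a migration. A minimal sketch using the bash /dev/tcp pseudo-device; the host names are placeholders for your vCenter and ESXi hosts:

```shell
#!/bin/bash
# Probe a TCP port and report whether it accepts connections.
# Uses the bash /dev/tcp pseudo-device and the coreutils timeout command.
check_port() {
  local host="$1" port="$2"
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# Placeholder hosts; substitute your vCenter and ESXi host names.
check_port vcenter.example.com 443   # provider inventory, disk transfer auth
check_port esxi01.example.com 902    # disk transfer data copy
```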
3.1.1. Increasing the NFC service memory of an ESXi host
If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host to enable additional connections for migrations. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.
Procedure
- Log in to the ESXi host as root.
- Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

  ...
  <nfcsvc>
    <path>libnfcsvc.so</path>
    <enabled>true</enabled>
    <maxMemory>1000000000</maxMemory>
    <maxStreamMemory>10485760</maxStreamMemory>
  </nfcsvc>
  ...

- Restart hostd:

  # /etc/init.d/hostd restart
You do not need to reboot the host.
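The edit above can also be scripted. A minimal sketch, demonstrated on a local sample file; on the ESXi host the target file is /etc/vmware/hostd/config.xml, and the starting value shown is illustrative:

```shell
#!/bin/sh
# Raise the NFC service memory limit by rewriting the <maxMemory> value.
# Demonstrated on a local sample file; on the ESXi host the target is
# /etc/vmware/hostd/config.xml. The starting value below is illustrative.
CONFIG=./config.xml

cat > "$CONFIG" << 'EOF'
<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <maxMemory>20971520</maxMemory>
  <maxStreamMemory>10485760</maxStreamMemory>
</nfcsvc>
EOF

# Replace whatever value is present with 1000000000.
sed -i 's|<maxMemory>[0-9]*</maxMemory>|<maxMemory>1000000000</maxMemory>|' "$CONFIG"

grep maxMemory "$CONFIG"   # shows the updated <maxMemory> line
```

Remember to restart hostd afterward, as in the procedure above.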
3.2. Migrating virtual machines by using the MTV web console
You can migrate virtual machines to OpenShift Virtualization by using the MTV web console.
3.2.1. Adding providers
You can add VMware and OpenShift Virtualization providers by using the MTV web console.
3.2.1.1. Adding a VMware source provider
You can add a VMware source provider by using the MTV web console.
Procedure
- In the MTV web console, click Providers.
- Click Add provider.
- Select VMware from the Type list.
Fill in the following fields:
- Name: Name to display in the list of providers
- Hostname or IP address: vCenter host name or IP address
- Username: vCenter admin user name, for example, administrator@vsphere.local
- Password: vCenter admin password
- SHA-1 fingerprint: vCenter SHA-1 fingerprint

  To obtain the vCenter SHA-1 fingerprint, enter the following command:

  $ openssl s_client \
      -connect <vcenter.example.com>:443 \ 1
      < /dev/null 2>/dev/null \
      | openssl x509 -fingerprint -noout -in /dev/stdin \
      | cut -d '=' -f 2
- 1
- Specify the vCenter host name.
- Click Add to add and save the provider.
The VMware provider appears in the list of providers.
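The fingerprint pipeline in the procedure above can be tried offline before pointing it at vCenter. A minimal sketch that runs the same extraction stage against a throwaway self-signed certificate instead of a live connection; the CN value is a placeholder:

```shell
#!/bin/sh
# Exercise the fingerprint-extraction stage of the pipeline offline,
# using a throwaway self-signed certificate instead of a live vCenter
# connection. The CN value is a placeholder.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -subj "/CN=vcenter.example.com" -days 1 2>/dev/null > cert.pem

# Same extraction used against vCenter: print the SHA-1 fingerprint,
# then strip the "SHA1 Fingerprint=" label with cut.
FINGERPRINT=$(openssl x509 -sha1 -fingerprint -noout -in cert.pem \
    | cut -d '=' -f 2)
echo "$FINGERPRINT"   # 20 colon-separated hex bytes
```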
Selecting a migration network for a VMware provider
You can select a migration network in the MTV web console for a VMware source provider to reduce risk to the VMware environment and to improve performance.
The default migration network is the management network. However, using the management network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the VMware platform because the disk transfer operation might saturate the network and impede communication between vCenter and the ESXi hosts.
Prerequisites
- The migration network must have sufficient throughput, with a minimum speed of 10 Gbps, for disk transfer.
- The migration network must be accessible to the OpenShift Virtualization nodes through the default gateway.
Note: The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.
- The migration network must have jumbo frames enabled.
- You must have administrator privileges for each ESXi host.
Procedure
- In the MTV web console, click Providers.
- Click VMware.
- Click the host number in the Hosts column beside a VMware provider to view a list of hosts.
- Select one or more hosts and click Select migration network.
Complete the following fields:
- Network: Select the migration network.
You can clear the migration network selection by selecting the default management network.
- ESXi host admin username: Specify the ESXi host admin user name, for example, root.
- ESXi host admin password: Specify the ESXi host password.
- Click Save.
- Verify that the status of each host is Ready.
If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.
3.2.1.2. Adding an OpenShift Virtualization provider
You can add an OpenShift Virtualization provider to the MTV web console in addition to the default OpenShift Virtualization provider, which is the provider where you installed MTV.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
Procedure
- In the MTV web console, click Providers.
- Click Add provider.
- Select OpenShift Virtualization from the Type list.
Complete the following fields:
- Cluster name: Specify the cluster name to display in the list of target providers.
- URL: Specify the API endpoint of the cluster.
- Service account token: Specify the cluster-admin service account token.
- Click Check connection to verify the credentials.
- Click Add.
The provider appears in the list of providers.
Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the MTV web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the MTV web console, click Providers.
- Click OpenShift Virtualization.
- Select a provider and click Select migration network.
- Select a network from the list of available networks and click Select.
- Click the network number in the Networks column beside the provider to verify that the selected network is the default migration network.
3.2.2. Creating a network mapping
You can create one or more network mappings by using the MTV web console to map source networks to OpenShift Virtualization networks.
You cannot map an opaque network, typically managed by NSX, to an OpenShift Virtualization network.
Prerequisites
- Source and target providers added to the web console.
- If you map more than one source and target network, you must create a network attachment definition for each additional target network.
Procedure
- Click Mappings → Network.
- Click Create mapping.
Complete the following fields:
- Name: Enter a name to display in the network mappings list.
- Source provider: Select a source provider.
- Target provider: Select a target provider.
- Source networks: Select a source network.
- Target namespaces/networks: Select a target network.
- Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.
- If you create an additional network mapping, select the network attachment definition as the target network.
- Click Create.
The network mapping is displayed on the Network mappings screen.
3.2.3. Creating a storage mapping
You can create a storage mapping by using the MTV web console to map source data stores to OpenShift Virtualization storage classes.
Prerequisites
- Source and target providers added to the web console.
- Local and shared persistent storage that supports VM migration.
Procedure
- Click Mappings → Storage.
- Click Create mapping.
Complete the following fields:
- Name: Enter a name to display in the storage mappings list.
- Source provider: Select a source provider.
- Target provider: Select a target provider.
- Source datastores: Select a source data store.
- Target storage classes: Select a target storage class.
- Optional: Click Add to create additional storage mappings or to map multiple data stores to a single storage class.
- Click Create.
The mapping is displayed on the Storage mappings screen.
3.2.4. Creating a migration plan
You can create a migration plan by using the MTV web console.
A migration plan allows you to group virtual machines that you want to migrate together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.
Prerequisites
- You must add the VDDK image to the spec.vddkInitImage field of the HyperConverged custom resource (CR).
- You must add a source provider to the web console.
- If the target provider is not the OpenShift Virtualization cluster on which you installed MTV, you must add a target provider.
- If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
- If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host.
Procedure
- In the web console, click Migration plans and then click Create migration plan.
Complete the following fields:
- Plan name: Enter a migration plan name to display in the migration plan list.
- Plan description: Optional. Brief description of the migration plan.
- Source provider: Select a source provider.
- Target provider: Select a target provider.
- Target namespace: You can type to search for an existing target namespace or create a new namespace.
You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and clicking Select.
If you defined a migration transfer network for the OpenShift Virtualization provider and if the network is in the target namespace, that network is the default network for all migration plans. Otherwise, the pod network is used.
- Click Next.
- Click By clusters and hosts or By folders, select clusters, hosts, or folders to filter the list of VMs, and then click Next.
- Select the VMs to migrate and then click Next.
- Select an existing network mapping or create a new network mapping.
To create a new network mapping:
- Select a target network for each source network.
- Optional. Select Save mapping to use again and enter a network mapping name.
- Click Next.
- Select an existing storage mapping or create a new storage mapping.
To create a new storage mapping:
- Select a target storage class for each source data store.
- Optional. Select Save mapping to use again and enter a storage mapping name.
- Click Next.
- Select Cold migration or Warm migration and click Next.
- Cold migration: The source VMs are stopped while the data is copied.
- Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.
- Review your migration plan and click Finish.
The migration plan is saved in the migration plan list.
3.2.5. Running a migration plan
You can run a migration plan and view its progress in the MTV web console.
Prerequisites
- Valid migration plan.
Procedure
- Click Migration plans.
The Migration plans list displays the source and target providers, the number of VMs being migrated, and the status of the plan.
- Click Start beside a migration plan to start the migration.
If the migration type is Warm, the precopy stage starts.
- Click Cutover beside a warm migration plan to complete the migration.
- Expand a migration plan to view the migration details.
The migration details screen displays the migration start and end time, the amount of data copied, and a progress pipeline for each VM being migrated.
- Expand a VM to view the migration steps, elapsed time of each step, and the state.
3.2.6. Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the MTV web console.
Procedure
- Click Migration plans.
- Click the name of a running migration plan to view the migration details.
- Select one or more VMs and click Cancel.
- Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
3.3. Migrating virtual machines from the command line interface
You can migrate virtual machines (VMs) from the command line (CLI) by creating the following custom resources (CRs):
- Secret contains the VMware provider credentials.
- Provider contains the VMware provider details.
- Host contains the VMware host details.
- NetworkMap maps the source and destination networks.
- StorageMap maps the source and destination storage.
- Plan contains a list of VMs to migrate and specifies whether the migration is cold or warm. The Plan references the providers and maps.
- Migration runs the Plan. If the migration is warm, it specifies the cutover time.

You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.
The term destination in the API is the same as target in the web console.
You must specify a name for cluster-scoped CRs.
You must specify both a name and a namespace for namespace-scoped CRs.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- The OpenShift CLI must be installed.
- If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.
- You must add the VDDK image to the spec.vddkInitImage field of the HyperConverged custom resource (CR).
- If you are performing a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
- If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host.
Procedure
- Obtain the vCenter SHA-1 fingerprint:

  $ openssl s_client \
      -connect <www.example.com>:443 \ 1
      < /dev/null 2>/dev/null \
      | openssl x509 -fingerprint -noout -in /dev/stdin \
      | cut -d '=' -f 2
- 1
- Specify the vCenter name.
Example output
01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
- Create a Secret CR manifest for the VMware provider:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <vmware_secret>
  namespace: openshift-mtv
type: Opaque
stringData:
  user: <vcenter_user> 1
  password: <vcenter_password> 2
  thumbprint: <vcenter_fingerprint> 3
EOF
- Create a Provider CR manifest for the VMware provider:

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <vmware_provider>
  namespace: openshift-mtv
spec:
  type: vsphere
  url: <api_end_point> 1
  secret:
    name: <vmware_secret> 2
    namespace: openshift-mtv
EOF
- Create a Host CR manifest for the VMware host:

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Host
metadata:
  name: <vmware_host>
  namespace: openshift-mtv
spec:
  provider:
    namespace: openshift-mtv
    name: <source_provider> 1
  id: <source_host_mor> 2
  ipAddress: <source_network_ip> 3
EOF
- Create a NetworkMap CR manifest to map the source and destination networks:

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: openshift-mtv
spec:
  map:
    - destination:
        type: pod 1
      source:
        id: <source_network_mor> 2
    - destination:
        type: multus
        name: <network_attachment_definition> 3
        namespace: <network_attachment_definition_namespace> 4
      source:
        id: <source_network_mor>
  provider:
    source:
      name: <vmware_provider>
      namespace: openshift-mtv
    destination:
      name: <destination_cluster>
      namespace: openshift-mtv
EOF
- Create a StorageMap CR manifest:

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: openshift-mtv
spec:
  map:
    - destination:
        storageClass: <storage_class>
      source:
        id: <source_datastore_mor> 1
  provider:
    source:
      name: <vmware_provider>
      namespace: openshift-mtv
    destination:
      name: <destination_cluster>
      namespace: openshift-mtv
EOF
- 1
- Specify the managed object reference of the VMware data storage.
- Create a Plan CR manifest for the migration:

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> 1
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: <vmware_provider>
      namespace: openshift-mtv
    destination:
      name: <destination_cluster>
      namespace: openshift-mtv
  warm: true 2
  map:
    network: 3
      name: <network_map> 4
      namespace: openshift-mtv
    storage:
      name: <storage_map> 5
      namespace: openshift-mtv
  targetNamespace: openshift-mtv
  vms: 6
    - id: <source_vm_mor> 7
    - name: <source_vm>
EOF
- 1
- Specify the name of the Plan CR.
- 2
- Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration CR manifest, only the precopy stage will run.
- 3
- You can create multiple network mappings for source and destination networks.
- 4
- Specify the name of the NetworkMap CR.
- 5
- Specify the name of the StorageMap CR.
- 6
- You can use either the id or the name parameter to specify the source VMs.
- 7
- Specify the managed object reference of the VMware VM.
- Optional: To change the time interval between the CBT snapshots for warm migration, patch the vm-import-controller-config config map:

$ oc patch configmap/vm-import-controller-config \
    -n openshift-cnv \
    -p '{"data": {"warmImport.intervalMinutes": "<interval>"}}' 1
- 1
- Specify the time interval in minutes. The default value is 60.
- Create a Migration CR manifest to run the Plan CR:

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration> 1
  namespace: openshift-mtv
spec:
  plan:
    name: <plan> 2
    namespace: openshift-mtv
  cutover: <cutover_time> 3
EOF
- 1
- Specify the name of the Migration CR.
- 2
- Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachineImport CR for each VM that is migrated.
- 3
- Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.
- View the VirtualMachineImport pods to monitor the progress of the migration:

$ oc get pods -n openshift-mtv
3.3.1. Canceling a migration
You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).
Canceling an entire migration
- Delete the Migration CR:

$ oc delete migration <migration_name> -n openshift-mtv 1
Canceling the migration of individual VMs
- Add the individual VMs to the Migration CR manifest:

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration_name>
  namespace: openshift-mtv
...
spec:
  cancel:
    - id: vm-102 1
    - id: vm-203
    - name: rhel8-vm
EOF
- View the VirtualMachineImport pods to monitor the progress of the remaining VMs:

$ oc get pods -n openshift-mtv