Chapter 12. Live migration
12.1. About live migration
Live migration is the process of moving a running virtual machine (VM) to another node in the cluster without interrupting the virtual workload. Live migration enables smooth transitions during cluster upgrades or any time a node needs to be drained for maintenance or configuration changes.
By default, live migration traffic is encrypted using Transport Layer Security (TLS).
12.1.1. Live migration requirements
Live migration has the following requirements:
- The cluster must have shared storage with ReadWriteMany (RWX) access mode.
- The cluster must have sufficient RAM and network bandwidth.
Note: You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:
Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)
For example, if at most two nodes can drain in parallel and the highest total VM memory request allocation on any node is 64 GiB, you need approximately 128 GiB of spare memory capacity. The default number of migrations that can run in parallel in the cluster is 5.
- If a VM uses a host model CPU, the nodes must support the CPU.
- Configuring a dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
12.1.2. About live migration permissions
In OpenShift Virtualization 4.19 and later, live migration operations are restricted to users who are explicitly granted the kubevirt.io:migrate cluster role. Users with this role can create, delete, and update virtual machine (VM) live migration requests, which are represented by VirtualMachineInstanceMigration (VMIM) custom resources.
Cluster administrators can bind the kubevirt.io:migrate role to trusted users or groups at either the namespace or cluster level.
Before OpenShift Virtualization 4.19, namespace administrators had live migration permissions by default. This behavior changed in version 4.19 to prevent unintended or malicious disruptions to infrastructure-critical migration operations.
As a cluster administrator, you can preserve the old behavior by creating a temporary cluster role before updating. After assigning the new role to users, delete the temporary role to enforce the more restrictive permissions. If you have already updated, you can still revert to the old behavior by aggregating the kubevirt.io:migrate role into the admin cluster role.
12.1.3. Preserving pre-4.19 live migration permissions during update
Before you update to OpenShift Virtualization 4.20, you can create a temporary cluster role to preserve the previous live migration permissions until you are ready for the more restrictive default permissions to take effect.
Prerequisites
-
The OpenShift CLI (
oc) is installed. - You have cluster administrator permissions.
Procedure
Before updating to OpenShift Virtualization 4.20, create a temporary ClusterRole object, as in the example that follows. The rbac.authorization.k8s.io/aggregate-to-admin label ensures that this cluster role is aggregated into the admin role before you update OpenShift Virtualization. The update process does not modify it, ensuring the previous behavior is maintained.
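The following manifest is a minimal sketch; the rules stanza is an assumption that mirrors the permissions granted by the kubevirt.io:migrate role:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubevirt.io:upgrademigrate
  labels:
    # Aggregates this temporary role into the admin cluster role
    # before the update, preserving the pre-4.19 behavior.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  # Assumed rule set, mirroring kubevirt.io:migrate: manage
  # VirtualMachineInstanceMigration (VMIM) custom resources.
  - apiGroups:
      - kubevirt.io
    resources:
      - virtualmachineinstancemigrations
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - update
```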
Add the cluster role manifest to the cluster by running the following command:
$ oc apply -f <cluster_role_file_name>.yaml
- Update OpenShift Virtualization to version 4.20.
Bind the kubevirt.io:migrate cluster role to trusted users or groups by running one of the following commands, replacing <namespace>, <first_user>, <second_user>, and <group_name> with your own values.
To bind the role at the namespace level, run the following command:
$ oc create -n <namespace> rolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
To bind the role at the cluster level, run the following command:
$ oc create clusterrolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
When you have bound the kubevirt.io:migrate role to all necessary users, delete the temporary ClusterRole object by running the following command:
$ oc delete clusterrole kubevirt.io:upgrademigrate
After you delete the temporary cluster role, only users with the kubevirt.io:migrate role can create, delete, and update live migration requests.
12.1.4. Granting live migration permissions
Grant trusted users or groups the ability to create, delete, and update live migration instances.
Prerequisites
- The OpenShift CLI (oc) is installed.
- You have cluster administrator permissions.
Procedure
Optional: To change the default behavior so that namespace administrators always have permission to create, delete, and update live migrations, aggregate the kubevirt.io:migrate role into the admin cluster role by running the following command:
$ oc label --overwrite clusterrole kubevirt.io:migrate rbac.authorization.k8s.io/aggregate-to-admin=true
Bind the kubevirt.io:migrate cluster role to trusted users or groups by running one of the following commands, replacing <namespace>, <first_user>, <second_user>, and <group_name> with your own values.
To bind the role at the namespace level, run the following command:
$ oc create -n <namespace> rolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
To bind the role at the cluster level, run the following command:
$ oc create clusterrolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
12.1.5. VM migration tuning
You can adjust your cluster-wide live migration settings based on the type of workload and migration scenario. This enables you to control how many VMs migrate at the same time, the network bandwidth you want to use for each migration, and how long OpenShift Virtualization attempts to complete the migration before canceling the process. Configure these settings in the HyperConverged custom resource (CR).
If you are migrating multiple VMs per node at the same time, set a bandwidthPerMigration limit to prevent a large or busy VM from using a large portion of the node’s network bandwidth. By default, the bandwidthPerMigration value is 0, which means unlimited.
A large VM running a heavy workload (for example, database processing), with higher memory dirty rates, requires a higher bandwidth to complete the migration.
Post copy mode, when enabled, triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime. This can impact performance during the transfer.
Post copy mode should not be used for critical data or with unstable networks.
12.1.6. Common live migration tasks
You can perform the following live migration tasks:
- Configure live migration settings
- Configure live migration for heavy workloads
- Initiate and cancel live migration
- Monitor the progress of all live migrations in the Migrations tab of the OpenShift Container Platform web console.
- View VM migration metrics in the Metrics tab of the web console.
12.1.7. Additional resources
12.2. Configuring live migration
You can configure live migration settings to ensure that the migration processes do not overwhelm the cluster.
You can configure live migration policies to apply different migration configurations to groups of virtual machines (VMs).
12.2.1. Configuring live migration limits and timeouts
Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR and add the necessary live migration parameters:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Example configuration file
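The following sketch shows the relevant spec.liveMigrationConfig stanza; the values are illustrative, and the numbered comments correspond to the explanations that follow:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    bandwidthPerMigration: 64Mi            # 1
    completionTimeoutPerGiB: 800           # 2
    parallelMigrationsPerCluster: 5        # 3
    parallelOutboundMigrationsPerNode: 2   # 4
    progressTimeout: 150                   # 5
    allowPostCopy: false                   # 6
```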
1. Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. Default: 0, which is unlimited.
2. The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, with the value 800, a VM with 6 GiB of memory times out if its migration has not completed in 4800 seconds. If the Migration Method is BlockMigration, the size of the migrating disks is included in the calculation.
3. Number of migrations running in parallel in the cluster. Default: 5.
4. Maximum number of outbound migrations per node. Default: 2.
5. The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: 150.
6. If a VM is running a heavy workload and the memory dirty rate is too high, this can prevent the migration from converging. To prevent this, you can enable post copy mode. By default, allowPostCopy is set to false.
You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150.
12.2.2. Configure live migration for heavy workloads
When migrating a VM running a heavy workload (for example, database processing) with higher memory dirty rates, you need a higher bandwidth to complete the migration.
If the dirty rate is too high, the migration from one node to another does not converge. To prevent this, enable post copy mode.
Post copy mode triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime.
Configure live migration for heavy workloads by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR and add the necessary parameters for migrating heavy workloads:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Example configuration file
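The following sketch shows illustrative values tuned for heavy workloads; the numbered comments correspond to the explanations that follow:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    bandwidthPerMigration: 0               # 1
    completionTimeoutPerGiB: 150           # 2
    parallelMigrationsPerCluster: 3        # 3
    parallelOutboundMigrationsPerNode: 1   # 4
    progressTimeout: 300                   # 5
    allowPostCopy: true                    # 6
```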
1. Bandwidth limit of each migration, where the value is the quantity of bytes per second. The default is 0, which is unlimited.
2. The migration is canceled if it is not completed in this time and, when post copy is enabled, triggers post copy mode. This value is measured in seconds per GiB of memory. You can lower completionTimeoutPerGiB to trigger post copy mode earlier in the migration process, or raise it to trigger post copy mode later.
3. Number of migrations running in parallel in the cluster. The default is 5. Keeping the parallelMigrationsPerCluster setting low is better when migrating heavy workloads.
4. Maximum number of outbound migrations per node. Configure a single VM per node for heavy workloads.
5. The migration is canceled if memory copy fails to make progress in this time. This value is measured in seconds. Increase this parameter for VMs with large memory running heavy workloads.
6. Use post copy mode when memory dirty rates are high to ensure the migration converges. Set allowPostCopy to true to enable post copy mode.
- Optional: If your main network is too busy for the migration, configure a secondary, dedicated migration network.
Post copy mode can impact performance during the transfer and should not be used for critical data or with unstable networks.
12.2.4. Live migration policies
You can create live migration policies to apply different migration configurations to groups of VMs that are defined by VM or project labels.
You can create live migration policies by using the OpenShift Container Platform web console or the command line.
12.2.4.1. Creating a live migration policy by using the CLI
You can create a live migration policy by using the command line. KubeVirt applies the live migration policy to selected virtual machines (VMs) by using any combination of labels:
- VM labels such as size, os, or gpu
- Project labels such as priority, bandwidth, or hpc-workload
For the policy to apply to a specific group of VMs, all labels on the group of VMs must match the labels of the policy.
If multiple live migration policies apply to a VM, the policy with the greatest number of matching labels takes precedence.
If multiple policies meet this criterion, the policies are sorted by alphabetical order of the matching label keys, and the first one in that order takes precedence.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VM object to which you want to apply a live migration policy, and add the corresponding VM labels.
Open the YAML configuration of the resource:
$ oc edit vm <vm_name>
Adjust the required label values in the .spec.template.metadata.labels section of the configuration. For example, to mark the VM as a production VM for the purposes of migration policies, add the kubevirt.io/environment: production line, as shown in the sketch after this step.
Save and exit the configuration.
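A minimal sketch of the relevant portion of the VM manifest; all fields other than the added label are assumed unchanged:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  template:
    metadata:
      labels:
        # Marks the VM as production for migration policy matching.
        kubevirt.io/environment: production
```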
Configure a MigrationPolicy object with the corresponding labels. The following example configures a policy that applies to all VMs that are labeled as production.
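A sketch of such a policy; the migration settings shown are illustrative, and the selector matches the kubevirt.io/environment: production label added in the previous step:

```yaml
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: production-migration-policy   # illustrative name
spec:
  # Illustrative per-policy overrides of the cluster-wide settings.
  allowAutoConverge: true
  bandwidthPerMigration: 217Ki
  completionTimeoutPerGiB: 23
  allowPostCopy: false
  selectors:
    # Applies to all VMs labeled kubevirt.io/environment: production.
    virtualMachineInstanceSelector:
      kubevirt.io/environment: production
```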
Create the migration policy by running the following command:
$ oc create -f <migration_policy>.yaml
12.2.5. Migrating a VM to a specific node
You can migrate a running virtual machine (VM) to a specific subset of nodes by using the addedNodeSelector field on the VirtualMachineInstanceMigration object. This field lets you apply additional node selection rules for a one-time migration attempt, without affecting the VM configuration or future migrations.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- The VM you want to migrate is running.
- You have identified the labels of the target nodes. Multiple labels can be specified and are combined with logical AND.
- The oc CLI tool is installed.
Procedure
Create a migration manifest YAML file. For example:
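A minimal sketch, assuming the target nodes are selected by the kubernetes.io/hostname label (any node label works); the migration name is illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job   # illustrative name
spec:
  vmiName: vmi-fedora
  addedNodeSelector:
    # One-time constraints for this migration only; multiple labels
    # are combined with logical AND.
    kubernetes.io/hostname: <target_node_name>
```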
where:
vmiName: Specifies the name of the running VM (for example, vmi-fedora).
addedNodeSelector: Specifies additional constraints for selecting the target node.
Apply the manifest to the cluster by running the following command:
$ oc apply -f <file_name>.yaml
If no nodes satisfy the constraints, the migration is declared a failure after a timeout. The VM remains unaffected.
12.3. Initiating and canceling live migration
You can initiate the live migration of a virtual machine (VM) to another node by using the OpenShift Container Platform web console or the command line.
You can cancel a live migration by using the web console or the command line. The VM remains on its original node.
You can also initiate and cancel live migration by using the virtctl migrate <vm_name> and virtctl migrate-cancel <vm_name> commands.
12.3.1. Initiating live migration
12.3.1.1. Initiating live migration by using the web console
You can live migrate a running virtual machine (VM) to a different node in the cluster by using the OpenShift Container Platform web console.
The Migrate action is visible to all users, but only cluster administrators can initiate a live migration.
Prerequisites
- You have the kubevirt.io:migrate RBAC role or you are a cluster administrator.
- The VM is migratable.
- If the VM is configured with a host model CPU, the cluster has an available node that supports the CPU model.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Take either of the following steps:
  - Click the Options menu beside the VM you want to migrate, hover over the Migrate option, and select Compute.
  - Open the VM details page of the VM you want to migrate, click the Actions menu, hover over the Migrate option, and select Compute.
- In the Migrate Virtual Machine to a different Node dialog box, select either Automatically Selected Node or Specific Node.
- If you selected the Specific Node option, choose a node from the list.
- Click Migrate Virtual Machine.
12.3.1.2. Initiating live migration by using the CLI
You can initiate the live migration of a running virtual machine (VM) by using the command line to create a VirtualMachineInstanceMigration object for the VM.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have the kubevirt.io:migrate RBAC role or you are a cluster administrator.
Procedure
Create a VirtualMachineInstanceMigration manifest for the VM that you want to migrate:
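A minimal sketch of the manifest; only the name of the target VM instance is required:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: <migration_name>
spec:
  # Name of the running VM instance to migrate.
  vmiName: <vm_name>
```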
Create the object by running the following command:
$ oc create -f <migration_name>.yaml
The VirtualMachineInstanceMigration object triggers a live migration of the VM. This object exists in the cluster for as long as the virtual machine instance is running, unless it is manually deleted.
Verification
Obtain the VM status by running the following command:
$ oc describe vmi <vm_name> -n <namespace>
Example output
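The exact output varies by cluster. An abbreviated sketch of the relevant Migration State stanza, with assumed placeholder values:

```
...
Status:
  Migration State:
    Completed:            true
    Source Node:          <source_node>
    Target Node:          <target_node>
    Target Node Address:  <ip_address>
...
```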
12.3.2. Canceling live migration
12.3.2.1. Canceling live migration by using the web console
You can cancel the live migration of a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You have the kubevirt.io:migrate RBAC role or you are a cluster administrator.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select Cancel Migration on the Options menu beside a VM.
12.3.2.2. Canceling live migration by using the CLI
Cancel the live migration of a virtual machine by deleting the VirtualMachineInstanceMigration object associated with the migration.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have the kubevirt.io:migrate RBAC role or you are a cluster administrator.
Procedure
Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example:
$ oc delete vmim migration-job
12.4. Enabling cross-cluster live migration for virtual machines
Cross-cluster live migration enables users to move a virtual machine (VM) workload from one OpenShift Container Platform cluster to another cluster without disruption. You enable cross-cluster live migration by setting cluster feature gates in OpenShift Virtualization and Migration Toolkit for Virtualization (MTV).
Prerequisites
- OpenShift Virtualization 4.20 or later must be installed.
The OpenShift Container Platform and OpenShift Virtualization minor release versions must match. For example, if the OpenShift Container Platform version is 4.20.0, the OpenShift Virtualization version must also be 4.20.0.
- Two OpenShift Container Platform clusters are required, and the migration network for both clusters must be connected to the same L2 network segment.
- You must have cluster administration privileges and appropriate RBAC privileges to manage VMs on both clusters.
Cross-cluster live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
12.4.1. Setting a live migration feature gate for each cluster in OpenShift Virtualization
To enable cross-cluster live migration, you must set a feature gate for each of the two clusters in OpenShift Virtualization.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You must have cluster admin privileges.
- The virt-synchronization-controller pods must be running.
Procedure
Set the feature gate by running the following command for each cluster:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op":"replace", "path": "/spec/featureGates/decentralizedLiveMigration", "value": true}]'
Verification
To verify that the feature gate enablement is successful for each cluster, run the following command in the OpenShift Virtualization namespace to locate the synchronization pods:
$ oc get -n openshift-cnv pod | grep virt-synchronization
Example output:
virt-synchronization-controller-898789f8fc-nsbsm   1/1   Running   0   5d1h
virt-synchronization-controller-898789f8fc-vmmfj   1/1   Running   0   5d1h
12.4.2. Setting a live migration feature gate in the Migration Toolkit for Virtualization (MTV)
You enable the OpenShift Container Platform live migration feature gate in the Migration Toolkit for Virtualization (MTV) to allow virtual machines to migrate between clusters during cross-cluster live migration. This feature gate must be enabled in both clusters that participate in the migration.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You must have cluster admin privileges.
- The virt-synchronization-controller pods must be running.
Procedure
Enable the feature gate by modifying the ForkliftController CR. Run the following command:
$ oc patch ForkliftController forklift-controller -n openshift-mtv --type json -p '[{"op": "add", "path": "/spec/feature_ocp_live_migration", "value": "true"}]'
Verification
Verify that the feature gate is enabled by checking the ForkliftController custom resource (CR). Run the following command:
$ oc get ForkliftController forklift-controller -n openshift-mtv -o yaml
Confirm that the feature_ocp_live_migration key value is set to true, as shown in the following example:
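An abbreviated sketch of the expected output; the apiVersion is an assumption, and unrelated fields are omitted:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Confirms that cross-cluster live migration is enabled in MTV.
  feature_ocp_live_migration: "true"
  # ...other fields omitted...
```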
12.5. Configuring a cross-cluster live migration network
Cross-cluster live migration requires that the clusters be connected in the same network. Specifically, virt-handler pods must be able to communicate.
Cross-cluster live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
12.5.1. Configuration for a bridge secondary network
The following object describes the configuration parameters for the Bridge CNI plugin:
| Field | Type | Description |
|---|---|---|
| cniVersion | string | The CNI specification version. The 0.3.1 value is required. |
| name | string | The value for the name parameter you provided previously for the CNO configuration. |
| type | string | The name of the CNI plugin to configure: bridge. |
| ipam | object | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| bridge | string | Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0. |
| ipMasq | boolean | Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false. |
| isGateway | boolean | Optional: Set to true to assign an IP address to the bridge. The default value is false. |
| isDefaultGateway | boolean | Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false. If isDefaultGateway is set to true, then isGateway is also set to true automatically. |
| forceAddress | boolean | Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false, if an IPv4 address or an IPv6 address from overlapping subnets is assigned to the virtual bridge, an error occurs. The default value is false. |
| hairpinMode | boolean | Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is false. |
| promiscMode | boolean | Optional: Set to true to enable promiscuous mode on the bridge. The default value is false. |
| vlan | string | Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. |
| preserveDefaultVlan | string | Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true. |
| vlanTrunk | list | Optional: Assign a VLAN trunk tag. The default value is none. |
| mtu | string | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| enabledad | boolean | Optional: Enables duplicate address detection for the container side veth. The default value is false. |
| macspoofchk | boolean | Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is false. |
The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface.
To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:
$ bridge vlan add vid VLAN_ID dev DEV
12.5.1.1. Bridge CNI plugin configuration example
The following example configures a secondary network named bridge-net:
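A sketch of the raw CNI configuration, assuming an illustrative VLAN tag of 2 and DHCP-based IPAM:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge-net",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
  }
}
```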
12.5.2. Configuring a dedicated secondary network for live migration
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example:
Example configuration file
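A minimal sketch of the manifest; the macvlan plugin, the whereabouts IPAM type, and the address range are illustrative choices. Callout 1 is marked inline, and callouts 2 to 4 correspond to the "master", "type", and "range" fields:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network   # 1
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "10.200.5.0/24"
    }
  }'
```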
1. Specify the name of the NetworkAttachmentDefinition object.
2. Specify the name of the NIC to be used for live migration (the "master" field in the sketch).
3. Specify the name of the CNI plugin that provides the network for the NAD (the "type" field).
4. Specify an IP address range for the secondary network (the "range" field). This range must not overlap the IP addresses of the main network.
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
NetworkAttachmentDefinitionobject to thespec.liveMigrationConfigstanza of theHyperConvergedCR:Example
HyperConvergedmanifestCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
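A minimal sketch; only the network field is new in this step, and the other values are illustrative:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    network: my-secondary-network   # 1
    completionTimeoutPerGiB: 800
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
```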
1. Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
- Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
12.6. About Migration Toolkit for Virtualization (MTV) providers
To migrate a virtual machine (VM) across OpenShift Container Platform clusters, you must configure an OpenShift Container Platform provider for each cluster that you are including in the migration. If MTV is already installed on a cluster, a local provider already exists.
Cross-cluster live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Next steps
12.6.1. Configuring the root certificate authority for providers
You must configure an OpenShift Container Platform provider for each cluster that you are including in the migration, and each provider requires a certificate authority (CA) for the cluster. It is important to configure the root CA for the entire cluster to avoid CA expiration, which causes the provider to fail.
Procedure
Run the following command against the cluster for which you are creating the provider:
$ oc get cm kube-root-ca.crt -o=jsonpath={.data.ca\\.crt}
- Copy the printed certificate.
- In the Migration Toolkit for Virtualization (MTV) web console, create a provider and select OpenShift Virtualization.
Paste the certificate into the CA certificate field, as shown in the following example:
-----BEGIN CERTIFICATE-----
<CA_certificate_content>
-----END CERTIFICATE-----
12.6.1.1. Creating the long-lived service account and token to use with MTV providers
When you register an OpenShift Virtualization provider in the Migration Toolkit for Virtualization (MTV) web console, you must supply credentials that allow MTV to interact with the cluster. Creating a long-lived service account and cluster role binding gives MTV persistent permissions to read and create virtual machine resources during migration.
Procedure
Create a cluster role manifest as shown in the following example.
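The exact rule set depends on your MTV version; the following is a minimal sketch, assuming the role needs read and create access to virtual machine resources as described above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <cluster_role_name>
rules:
  # Assumed permissions: read and create virtual machine resources
  # during migration. Adjust to your MTV version's requirements.
  - apiGroups:
      - kubevirt.io
    resources:
      - virtualmachines
      - virtualmachineinstances
      - virtualmachineinstancemigrations
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
```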
Create the cluster role by running the following command:
$ oc create -f <filename>.yaml
Create a service account by running the following command:
$ oc create serviceaccount <service_account_name> -n <service_account_namespace>
Create a cluster role binding that links the service account to the cluster role by running the following command:
$ oc create clusterrolebinding <service_account_name> --clusterrole=<cluster_role_name> --serviceaccount=<service_account_namespace>:<service_account_name>
Create a secret to hold the token by saving the following manifest as a YAML file:
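This follows the standard Kubernetes pattern for long-lived service account tokens; the annotation binds the token to the service account created earlier:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <name_of_secret>
  namespace: <namespace_bound_to_service_account>
  annotations:
    # Binds this token secret to the service account.
    kubernetes.io/service-account.name: <service_account_name>
type: kubernetes.io/service-account-token
```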
Apply the manifest by running the following command:
$ oc apply -f <filename>.yaml
After the secret is populated, run the following commands to get the service account bearer token:
$ TOKEN_BASE64=$(oc get secret "<name_of_secret>" -n "<namespace_bound_to_service_account>" -o jsonpath='{.data.token}')
$ TOKEN=$(echo "$TOKEN_BASE64" | base64 --decode)
$ echo "$TOKEN"
- Copy the printed token.
- In the Migration Toolkit for Virtualization (MTV) web console, when you create a provider and select OpenShift Virtualization, paste the token into the Service account bearer token field.