Chapter 12. Live migration


12.1. About live migration

Live migration is the process of moving a running virtual machine (VM) to another node in the cluster without interrupting the virtual workload. Live migration enables smooth transitions during cluster upgrades or any time a node needs to be drained for maintenance or configuration changes.

By default, live migration traffic is encrypted using Transport Layer Security (TLS).

12.1.1. Live migration requirements

Live migration has the following requirements:

  • The cluster must have shared storage with ReadWriteMany (RWX) access mode.
  • The cluster must have sufficient RAM and network bandwidth.

    Note

    You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

    (Maximum number of nodes that can drain in parallel) × (Highest total VM memory request allocations across nodes)

    For a worked example, see the sketch after this list.

    The default number of migrations that can run in parallel in the cluster is 5.

  • If a VM uses a host model CPU, the nodes must support the CPU.
  • Configuring a dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
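
The following is a minimal sketch of the spare-memory calculation, assuming hypothetical values of 2 nodes draining in parallel and 64 GiB as the highest total VM memory requests on any single node:

    $ echo "$((2 * 64)) GiB of spare memory request capacity required"

    Example output

    128 GiB of spare memory request capacity required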

12.1.2. About live migration permissions

In OpenShift Virtualization 4.19 and later, live migration operations are restricted to users who are explicitly granted the kubevirt.io:migrate cluster role. Users with this role can create, delete, and update virtual machine (VM) live migration requests, which are represented by VirtualMachineInstanceMigration (VMIM) custom resources.

Cluster administrators can bind the kubevirt.io:migrate role to trusted users or groups at either the namespace or cluster level.

Before OpenShift Virtualization 4.19, namespace administrators had live migration permissions by default. This behavior changed in version 4.19 to prevent unintended or malicious disruptions to infrastructure-critical migration operations.

As a cluster administrator, you can preserve the old behavior by creating a temporary cluster role before updating. After assigning the new role to users, delete the temporary role to enforce the more restrictive permissions. If you have already updated, you can still revert to the old behavior by aggregating the kubevirt.io:migrate role into the admin cluster role.

Before you update to OpenShift Virtualization 4.20, you can create a temporary cluster role to preserve the previous live migration permissions until you are ready for the more restrictive default permissions to take effect.

Prerequisites

  • The OpenShift CLI (oc) is installed.
  • You have cluster administrator permissions.

Procedure

  1. Before updating to OpenShift Virtualization 4.20, create a temporary ClusterRole object. For example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        rbac.authorization.k8s.io/aggregate-to-admin: "true" 1
      name: kubevirt.io:upgrademigrate
    rules:
    - apiGroups:
      - subresources.kubevirt.io
      resources:
      - virtualmachines/migrate
      verbs:
      - update
    - apiGroups:
      - kubevirt.io
      resources:
      - virtualmachineinstancemigrations
      verbs:
      - get
      - delete
      - create
      - update
      - patch
      - list
      - watch
      - deletecollection

    1 This cluster role is aggregated into the admin role before you update OpenShift Virtualization. The update process does not modify it, ensuring that the previous behavior is maintained.
  2. Add the cluster role manifest to the cluster by running the following command:

    $ oc apply -f <cluster_role_file_name>.yaml
  3. Update OpenShift Virtualization to version 4.20.
  4. Bind the kubevirt.io:migrate cluster role to trusted users or groups by running one of the following commands, replacing <namespace>, <first_user>, <second_user>, and <group_name> with your own values.

    • To bind the role at the namespace level, run the following command:

      $ oc create -n <namespace> rolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
    • To bind the role at the cluster level, run the following command:

      $ oc create clusterrolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
  5. When you have bound the kubevirt.io:migrate role to all necessary users, delete the temporary ClusterRole object by running the following command:

    $ oc delete clusterrole kubevirt.io:upgrademigrate

    After you delete the temporary cluster role, only users with the kubevirt.io:migrate role can create, delete, and update live migration requests.

12.1.4. Granting live migration permissions

Grant trusted users or groups the ability to create, delete, and update live migration instances.

Prerequisites

  • The OpenShift CLI (oc) is installed.
  • You have cluster administrator permissions.

Procedure

  • (Optional) To change the default behavior so that namespace administrators always have permission to create, delete, and update live migrations, aggregate the kubevirt.io:migrate role into the admin cluster role by running the following command:

    $ oc label --overwrite clusterrole kubevirt.io:migrate rbac.authorization.k8s.io/aggregate-to-admin=true
  • Bind the kubevirt.io:migrate cluster role to trusted users or groups by running one of the following commands, replacing <namespace>, <first_user>, <second_user>, and <group_name> with your own values.

    • To bind the role at the namespace level, run the following command:

      $ oc create -n <namespace> rolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
    • To bind the role at the cluster level, run the following command:

      $ oc create clusterrolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
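
Verification

  • Optional: Confirm that a user received the expected permissions by running an impersonated access review. This is a sketch; the user and namespace are placeholders:

    $ oc auth can-i create virtualmachineinstancemigrations.kubevirt.io -n <namespace> --as=<first_user>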

12.1.5. VM migration tuning

You can adjust your cluster-wide live migration settings based on the type of workload and migration scenario. This enables you to control how many VMs migrate at the same time, the network bandwidth you want to use for each migration, and how long OpenShift Virtualization attempts to complete the migration before canceling the process. Configure these settings in the HyperConverged custom resource (CR).

If you are migrating multiple VMs per node at the same time, set a bandwidthPerMigration limit to prevent a large or busy VM from using a large portion of the node’s network bandwidth. By default, the bandwidthPerMigration value is 0, which means unlimited.

A large VM running a heavy workload (for example, database processing), with higher memory dirty rates, requires a higher bandwidth to complete the migration.

Note

Post copy mode, when enabled, triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime. This can impact performance during the transfer.

Post copy mode should not be used for critical data or with unstable networks.


12.2. Configuring live migration

You can configure live migration settings to ensure that the migration processes do not overwhelm the cluster.

You can configure live migration policies to apply different migration configurations to groups of virtual machines (VMs).

Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace.

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  • Edit the HyperConverged CR and add the necessary live migration parameters:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

    Example configuration file

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      liveMigrationConfig:
        bandwidthPerMigration: 64Mi 1
        completionTimeoutPerGiB: 800 2
        parallelMigrationsPerCluster: 5 3
        parallelOutboundMigrationsPerNode: 2 4
        progressTimeout: 150 5
        allowPostCopy: false 6

    1 Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. Default: 0, which is unlimited.
    2 The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a VM with 6 GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration, the size of the migrating disks is included in the calculation.
    3 Number of migrations running in parallel in the cluster. Default: 5.
    4 Maximum number of outbound migrations per node. Default: 2.
    5 The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: 150.
    6 If a VM is running a heavy workload and the memory dirty rate is too high, this can prevent the migration from one node to another from converging. To prevent this, you can enable post copy mode. By default, allowPostCopy is set to false.
Note

You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150.
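
As an alternative to editing the CR, you can restore a default by removing the field with a JSON patch. The following sketch removes progressTimeout and assumes that the field is currently set:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "remove", "path": "/spec/liveMigrationConfig/progressTimeout"}]'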

When migrating a VM running a heavy workload (for example, database processing) with higher memory dirty rates, you need a higher bandwidth to complete the migration.

If the dirty rate is too high, the migration from one node to another does not converge. To prevent this, enable post copy mode.

Post copy mode triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime.

Configure live migration for heavy workloads by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace.

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the HyperConverged CR and add the necessary parameters for migrating heavy workloads:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

    Example configuration file

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      liveMigrationConfig:
        bandwidthPerMigration: 0Mi 1
        completionTimeoutPerGiB: 150 2
        parallelMigrationsPerCluster: 5 3
        parallelOutboundMigrationsPerNode: 1 4
        progressTimeout: 150 5
        allowPostCopy: true 6

    1 Bandwidth limit of each migration, where the value is the quantity of bytes per second. The default is 0, which is unlimited.
    2 If the migration does not complete in this time, the migration is canceled, or, when post copy is enabled, post copy mode is triggered. The value is measured in seconds per GiB of memory. Lower completionTimeoutPerGiB to trigger post copy mode earlier in the migration process, or raise it to trigger post copy mode later.
    3 Number of migrations running in parallel in the cluster. The default is 5. Keep the parallelMigrationsPerCluster setting low when migrating heavy workloads.
    4 Maximum number of outbound migrations per node. Configure a single VM per node for heavy workloads.
    5 The migration is canceled if memory copy fails to make progress in this time. The value is measured in seconds. Increase this parameter for large memory sizes running heavy workloads.
    6 Use post copy mode when memory dirty rates are high to ensure that the migration converges. Set allowPostCopy to true to enable post copy mode.
  2. Optional: If your main network is too busy for the migration, configure a secondary, dedicated migration network.
Note

Post copy mode can impact performance during the transfer and should not be used for critical data or with unstable networks.
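
If you prefer not to open an editor, you can apply individual heavy-workload settings with a JSON patch. The following sketch enables post copy mode only; adjust the other liveMigrationConfig fields as needed:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "add", "path": "/spec/liveMigrationConfig/allowPostCopy", "value": true}]'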

12.2.4. Live migration policies

You can create live migration policies to apply different migration configurations to groups of VMs that are defined by VM or project labels.

Tip

You can create live migration policies by using the OpenShift Container Platform web console.

You can create a live migration policy by using the command line. KubeVirt applies the live migration policy to selected virtual machines (VMs) by using any combination of labels:

  • VM labels such as size, os, or gpu
  • Project labels such as priority, bandwidth, or hpc-workload

For the policy to apply to a specific group of VMs, every label specified in the policy must be present on the VMs in that group.

Note

If multiple live migration policies apply to a VM, the policy with the greatest number of matching labels takes precedence.

If multiple policies meet this criteria, the policies are sorted by alphabetical order of the matching label keys, and the first one in that order takes precedence.
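
For example, consider two hypothetical policies that both match a VM labeled with kubevirt.io/environment: production and kubevirt.io/size: large. The policy names and allowPostCopy values below are illustrative only:

apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: example-policy-two-labels
spec:
  allowPostCopy: false
  selectors:
    virtualMachineInstanceSelector:
      kubevirt.io/environment: "production"
      kubevirt.io/size: "large"
---
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: example-policy-one-label
spec:
  allowPostCopy: true
  selectors:
    virtualMachineInstanceSelector:
      kubevirt.io/environment: "production"

Because example-policy-two-labels matches two of the VM labels and example-policy-one-label matches only one, example-policy-two-labels takes precedence for that VM.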

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the VM object to which you want to apply a live migration policy, and add the corresponding VM labels.

    1. Open the YAML configuration of the resource:

      $ oc edit vm <vm_name>
    2. Adjust the required label values in the .spec.template.metadata.labels section of the configuration. For example, to mark the VM as a production VM for the purposes of migration policies, add the kubevirt.io/environment: production line:

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: <vm_name>
        namespace: default
        labels:
          app: my-app
          environment: production
      spec:
        template:
          metadata:
            labels:
              kubevirt.io/domain: <vm_name>
              kubevirt.io/size: large
              kubevirt.io/environment: production
      # ...
    3. Save and exit the configuration.
  2. Configure a MigrationPolicy object with the corresponding labels. The following example configures a policy that applies to all VMs that are labeled as production:

    apiVersion: migrations.kubevirt.io/v1alpha1
    kind: MigrationPolicy
    metadata:
      name: <migration_policy>
    spec:
      selectors:
        namespaceSelector: 1
          hpc-workloads: "True"
          xyz-workloads-type: ""
        virtualMachineInstanceSelector: 2
          kubevirt.io/environment: "production"

    1 Specify project labels.
    2 Specify VM labels.
  3. Create the migration policy by running the following command:

    $ oc create -f <migration_policy>.yaml
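
Verification

  • Optional: Confirm that the policy was created by listing MigrationPolicy objects:

    $ oc get migrationpolicy <migration_policy>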

12.2.5. Migrating a VM to a specific node

You can migrate a running virtual machine (VM) to a specific subset of nodes by using the addedNodeSelector field on the VirtualMachineInstanceMigration object. This field lets you apply additional node selection rules for a one-time migration attempt, without affecting the VM configuration or future migrations.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • The VM you want to migrate is running.
  • You have identified the labels of the target nodes. Multiple labels can be specified and are combined with logical AND.
  • The oc CLI tool is installed.

Procedure

  1. Create a migration manifest YAML file. For example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: migration-job
    spec:
      vmiName: vmi-fedora
      addedNodeSelector:
        accelerator: gpu-enabled23
        kubernetes.io/hostname: "ip-172-28-114-199.example"

    where:

    vmiName
    Specifies the name of the running VM (for example, vmi-fedora).
    addedNodeSelector
    Specifies additional constraints for selecting the target node.
  2. Apply the manifest to the cluster by running the following command:

    $ oc apply -f <file_name>.yaml

    If no nodes satisfy the constraints, the migration is declared a failure after a timeout. The VM remains unaffected.
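
Verification

  • Check the outcome of the one-time migration attempt by inspecting the phase of the VirtualMachineInstanceMigration object. This sketch assumes the migration-job name from the example manifest:

    $ oc get vmim migration-job -o jsonpath='{.status.phase}'

    A phase of Succeeded indicates that the VM migrated to a node that satisfies the constraints. A phase of Failed indicates that the migration did not complete, for example because no node satisfied the constraints before the timeout.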

12.3. Initiating and canceling live migration

You can initiate the live migration of a virtual machine (VM) to another node by using the OpenShift Container Platform web console or the command line.

You can cancel a live migration by using the web console or the command line. The VM remains on its original node.

Tip

You can also initiate and cancel live migration by using the virtctl migrate <vm_name> and virtctl migrate-cancel <vm_name> commands.

12.3.1. Initiating live migration

You can live migrate a running virtual machine (VM) to a different node in the cluster by using the OpenShift Container Platform web console.

Note

The Migrate action is visible to all users but only cluster administrators can initiate a live migration.

Prerequisites

  • You have the kubevirt.io:migrate RBAC role or you are a cluster administrator.
  • The VM is migratable.
  • If the VM is configured with a host model CPU, the cluster has an available node that supports the CPU model.

Procedure

  1. Navigate to Virtualization → VirtualMachines in the web console.
  2. Take either of the following steps:

    • Click the Options menu beside the VM you want to migrate, hover over the Migrate option, and select Compute.
    • Open the VM details page of the VM you want to migrate, click the Actions menu, hover over the Migrate option, and select Compute.
  3. In the Migrate Virtual Machine to a different Node dialog box, select either Automatically Selected Node or Specific Node.

    1. If you selected the Specific Node option, choose a node from the list.
  4. Click Migrate Virtual Machine.

You can initiate the live migration of a running virtual machine (VM) by using the command line to create a VirtualMachineInstanceMigration object for the VM.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have the kubevirt.io:migrate RBAC role or you are a cluster administrator.

Procedure

  1. Create a VirtualMachineInstanceMigration manifest for the VM that you want to migrate:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: <migration_name>
    spec:
      vmiName: <vm_name>
  2. Create the object by running the following command:

    $ oc create -f <migration_name>.yaml

    The VirtualMachineInstanceMigration object triggers a live migration of the VM. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted.

Verification

  • Obtain the VM status by running the following command:

    $ oc describe vmi <vm_name> -n <namespace>

    Example output

    # ...
    Status:
      Conditions:
        Last Probe Time:       <nil>
        Last Transition Time:  <nil>
        Status:                True
        Type:                  LiveMigratable
      Migration Method:  LiveMigration
      Migration State:
        Completed:                    true
        End Timestamp:                2018-12-24T06:19:42Z
        Migration UID:                d78c8962-0743-11e9-a540-fa163e0c69f1
        Source Node:                  node2.example.com
        Start Timestamp:              2018-12-24T06:19:35Z
        Target Node:                  node1.example.com
        Target Node Address:          10.9.0.18:43891
        Target Node Domain Detected:  true

12.3.2. Canceling live migration

You can cancel the live migration of a virtual machine (VM) by using the OpenShift Container Platform web console.

Prerequisites

  • You have the kubevirt.io:migrate RBAC role or you are a cluster administrator.

Procedure

  1. Navigate to Virtualization → VirtualMachines in the web console.
  2. Select Cancel Migration on the Options menu beside a VM.

Cancel the live migration of a virtual machine by deleting the VirtualMachineInstanceMigration object associated with the migration.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have the kubevirt.io:migrate RBAC role or you are a cluster administrator.

Procedure

  • Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example:

    $ oc delete vmim migration-job
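
Verification

  • Optional: Confirm that the cancellation was processed by checking the migration state reported in the VMI status. This is a sketch; <vm_name> is the VM whose migration you canceled, and the abortStatus field typically reports the result of the abort request:

    $ oc get vmi <vm_name> -o jsonpath='{.status.migrationState.abortStatus}'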

Cross-cluster live migration enables users to move a virtual machine (VM) workload from one OpenShift Container Platform cluster to another cluster without disruption. You enable cross-cluster live migration by setting cluster feature gates in OpenShift Virtualization and Migration Toolkit for Virtualization (MTV).

Prerequisites

  • OpenShift Virtualization 4.20 or later must be installed.
  • The OpenShift Container Platform and OpenShift Virtualization minor release versions must match. For example, if the OpenShift Container Platform version is 4.20.0, the OpenShift Virtualization version must also be 4.20.0.
  • Two OpenShift Container Platform clusters are required, and the migration network for both clusters must be connected to the same L2 network segment.
  • You must have cluster administration privileges and appropriate RBAC privileges to manage VMs on both clusters.
Important

Cross-cluster live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

To enable cross-cluster live migration, you must set a feature gate for each of the two clusters in OpenShift Virtualization.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You must have cluster admin privileges.
  • The virt-synchronization-controller pods must be running.

Procedure

  • Set the feature gate by running the following command for each cluster:

    $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op":"replace", "path": "/spec/featureGates/decentralizedLiveMigration", "value": true}]'

Verification

  • To verify that the feature gate enablement is successful for each cluster, run the following command in the OpenShift Virtualization namespace to locate the synchronization pods:

    $ oc get -n openshift-cnv pod | grep virt-synchronization

    Example output:

    virt-synchronization-controller-898789f8fc-nsbsm      1/1     Running   0               5d1h
    virt-synchronization-controller-898789f8fc-vmmfj      1/1     Running   0               5d1h

You enable the OpenShift Container Platform live migration feature gate in the Migration Toolkit for Virtualization (MTV) to allow virtual machines to migrate between clusters during cross-cluster live migration. This feature gate must be enabled in both clusters that participate in the migration.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You must have cluster admin privileges.
  • The virt-synchronization-controller pods must be running.

Procedure

  • To enable the feature gate, modify the ForkliftController CR by running the following command:

    $ oc patch ForkliftController forklift-controller -n openshift-mtv --type json -p '[{"op": "add", "path": "/spec/feature_ocp_live_migration", "value": "true"}]'

Verification

  • Verify that the feature gate is enabled by checking the ForkliftController custom resource (CR). Run the following command:

    $ oc get ForkliftController forklift-controller -n openshift-mtv -o yaml

    Confirm that the feature_ocp_live_migration key value is set to true, as shown in the following example:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: ForkliftController
    metadata:
      name: forklift-controller
      namespace: openshift-mtv
    spec:
      feature_ocp_live_migration: "true"
      feature_ui_plugin: "true"
      feature_validation: "true"
      feature_volume_populator: "true"

Cross-cluster live migration requires that the clusters be connected in the same network. Specifically, virt-handler pods must be able to communicate.

Important

Cross-cluster live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The following object describes the configuration parameters for the Bridge CNI plugin:

Table 12.1. Bridge CNI plugin JSON configuration object

  • cniVersion (string): The CNI specification version. The 0.3.1 value is required.
  • name (string): The value for the name parameter you provided previously for the CNO configuration.
  • type (string): The name of the CNI plugin to configure: bridge.
  • ipam (object): The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
  • bridge (string): Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0.
  • ipMasq (boolean): Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false.
  • isGateway (boolean): Optional: Set to true to assign an IP address to the bridge. The default value is false.
  • isDefaultGateway (boolean): Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false. If isDefaultGateway is set to true, then isGateway is also set to true automatically.
  • forceAddress (boolean): Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false, if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false.
  • hairpinMode (boolean): Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is false.
  • promiscMode (boolean): Optional: Set to true to enable promiscuous mode on the bridge. The default value is false.
  • vlan (string): Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned.
  • preserveDefaultVlan (string): Optional: Indicates whether the default VLAN must be preserved on the veth end connected to the bridge. Defaults to true.
  • vlanTrunk (list): Optional: Assign a VLAN trunk tag. The default value is none.
  • mtu (integer): Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
  • enabledad (boolean): Optional: Enables duplicate address detection for the container side veth. The default value is false.
  • macspoofchk (boolean): Optional: Enables MAC spoof check, limiting the traffic originating from the container to the MAC address of the interface. The default value is false.

Note

The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface.

Note

To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:

$ bridge vlan add vid VLAN_ID dev DEV
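
For example, to allow VLAN 2 (the tag used in the bridge-net example that follows) on a hypothetical uplink interface named eth1:

$ bridge vlan add vid 2 dev eth1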

12.5.1.1. Bridge CNI plugin configuration example

The following example configures a secondary network named bridge-net:

{
  "cniVersion": "0.3.1",
  "name": "bridge-net",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
    }
}

To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You logged in to the cluster as a user with the cluster-admin role.
  • Each node has at least two Network Interface Cards (NICs).
  • The NICs for live migration are connected to the same VLAN.

Procedure

  1. Create a NetworkAttachmentDefinition manifest according to the following example:

    Example configuration file

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: my-secondary-network 
    1
    
      namespace: openshift-cnv
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "migration-bridge",
        "type": "macvlan",
        "master": "eth1", 
    2
    
        "mode": "bridge",
        "ipam": {
          "type": "whereabouts", 
    3
    
          "range": "10.200.5.0/24" 
    4
    
        }
      }'
    Copy to Clipboard Toggle word wrap

    1
    Specify the name of the NetworkAttachmentDefinition object.
    2
    Specify the name of the NIC to be used for live migration.
    3
    Specify the name of the CNI plugin that provides the network for the NAD.
    4
    Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
  2. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  3. Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:

    Example HyperConverged manifest

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      liveMigrationConfig:
        completionTimeoutPerGiB: 800
        network: <network> 1
        parallelMigrationsPerCluster: 5
        parallelOutboundMigrationsPerNode: 2
        progressTimeout: 150
    # ...

    1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
  4. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.

Verification

  • When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.

    $ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
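
    The printed target address should fall within the secondary-network range that you defined in the NetworkAttachmentDefinition object (10.200.5.0/24 in the earlier example) rather than in the default pod network. A hypothetical output looks like the following:

    Example output

    10.200.5.14:49152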

To migrate a virtual machine (VM) across OpenShift Container Platform clusters, you must configure an OpenShift Container Platform provider for each cluster that you are including in the migration. If MTV is already installed on a cluster, a local provider already exists.

Important

Cross-cluster live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You must configure an OpenShift Container Platform provider for each cluster that you are including in the migration, and each provider requires a certificate authority (CA) for the cluster. It is important to configure the root CA for the entire cluster to avoid CA expiration, which causes the provider to fail.

Procedure

  1. Run the following command against the cluster for which you are creating the provider:

    $ oc get cm kube-root-ca.crt -o=jsonpath={.data.ca\\.crt}
  2. Copy the printed certificate.
  3. In the Migration Toolkit for Virtualization (MTV) web console, create a provider and select OpenShift Virtualization.
  4. Paste the certificate into the CA certificate field, as shown in the following example:

    -----BEGIN CERTIFICATE-----
    <CA_certificate_content>
    -----END CERTIFICATE-----

When you register an OpenShift Virtualization provider in the Migration Toolkit for Virtualization (MTV) web console, you must supply credentials that allow MTV to interact with the cluster. Creating a long-lived service account and cluster role binding gives MTV persistent permissions to read and create virtual machine resources during migration.

Procedure

  1. Create the cluster role as shown in the following example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: live-migration-role
    rules:
      - apiGroups:
          - forklift.konveyor.io
        resources:
          - '*'
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - secrets
          - namespaces
          - configmaps
          - persistentvolumes
          - persistentvolumeclaims
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
      - apiGroups:
          - k8s.cni.cncf.io
        resources:
          - network-attachment-definitions
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - storage.k8s.io
        resources:
          - storageclasses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - kubevirt.io
        resources:
          - virtualmachines
          - virtualmachines/finalizers
          - virtualmachineinstancemigrations
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
      - apiGroups:
          - kubevirt.io
        resources:
          - kubevirts
          - virtualmachineinstances
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - cdi.kubevirt.io
        resources:
          - datavolumes
          - datavolumes/finalizers
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
      - apiGroups:
          - apps
        resources:
          - deployments
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
      - apiGroups:
          - instancetype.kubevirt.io
        resources:
          - virtualmachineclusterpreferences
          - virtualmachineclusterinstancetypes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - instancetype.kubevirt.io
        resources:
          - virtualmachinepreferences
          - virtualmachineinstancetypes
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
  2. Create the cluster role by running the following command:

    $ oc create -f <filename>.yaml
  3. Create a service account by running the following command:

    $ oc create serviceaccount <service_account_name> -n <service_account_namespace>
  4. Create a cluster role binding that links the service account to the cluster role, by running the following command:

    $ oc create clusterrolebinding <service_account_name> --clusterrole=<cluster_role_name> --serviceaccount=<service_account_namespace>:<service_account_name>
  5. Create a secret to hold the token by saving the following manifest as a YAML file:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <name_of_secret>
      namespace: <namespace_for_service_account>
      annotations:
        kubernetes.io/service-account.name: <service_account_name>
    type: kubernetes.io/service-account-token
  6. Apply the manifest by running the following command:

    $ oc apply -f <filename>.yaml
  7. After the secret is populated, run the following command to get the service account bearer token:

    $ TOKEN_BASE64=$(oc get secret "<name_of_secret>" -n "<namespace_bound_to_service_account>" -o jsonpath='{.data.token}')
      TOKEN=$(echo "$TOKEN_BASE64" | base64 --decode)
      echo "$TOKEN"
  8. Copy the printed token.
  9. In the Migration Toolkit for Virtualization (MTV) web console, when you create a provider and select OpenShift Virtualization, paste the token into the Service account bearer token field.