6.14. Migrating Virtual Machines Between Hosts
Live migration provides the ability to move a running virtual machine between physical hosts with no interruption to service. The virtual machine remains powered on and user applications continue to run while the virtual machine is relocated to a new physical host. In the background, the virtual machine’s RAM is copied from the source host to the destination host. Storage and network connectivity are not altered.
A virtual machine that is using a vGPU cannot be migrated to a different host.
6.14.1. Live Migration Prerequisites
This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV.
You can use live migration to seamlessly move virtual machines to support a number of common maintenance tasks. Your Red Hat Virtualization environment must be correctly configured to support live migration well in advance of using it.
At a minimum, the following prerequisites must be met to enable successful live migration of virtual machines:
- The source and destination hosts are members of the same cluster, ensuring CPU compatibility between them.
Live migrating virtual machines between different clusters is generally not recommended.
- The source and destination hosts' status is Up.
- The source and destination hosts have access to the same virtual networks and VLANs.
- The source and destination hosts have access to the data storage domain on which the virtual machine resides.
- The destination host has sufficient CPU capacity to support the virtual machine’s requirements.
- The destination host has sufficient unused RAM to support the virtual machine’s requirements.
- The migrating virtual machine does not have the cache!=none custom property set.
Live migration is performed using the management network and involves transferring large amounts of data between hosts. Concurrent migrations have the potential to saturate the management network. For best performance, Red Hat recommends creating separate logical networks for management, storage, display, and virtual machine data to minimize the risk of network saturation.
Configuring Virtual Machines with SR-IOV-Enabled vNICs to Reduce Network Outage during Migration
Virtual machines with vNICs that are directly connected to a virtual function (VF) of an SR-IOV-enabled host NIC can be further configured to reduce network outage during live migration:
- Ensure that the destination host has an available VF.
- Set the Passthrough and Migratable options in the passthrough vNIC’s profile. See Enabling Passthrough on a vNIC Profile in the Administration Guide.
- Enable hotplugging for the virtual machine’s network interface.
- Ensure that the virtual machine has a backup VirtIO vNIC, in addition to the passthrough vNIC, to maintain the virtual machine’s network connection during migration.
- Set the VirtIO vNIC’s No Network Filter option before configuring the bond. See Explanation of Settings in the VM Interface Profile Window in the Administration Guide.
- Add both vNICs as slaves under an active-backup bond on the virtual machine, with the passthrough vNIC as the primary interface. The bond and vNIC profiles can have one of the following configurations (a guest-side bonding sketch follows this list):
  - Recommended: The bond is not configured with fail_over_mac=active and the VF vNIC is the primary slave. Disable the VirtIO vNIC profile’s MAC-spoofing filter to ensure that traffic passing through the VirtIO vNIC is not dropped because it uses the VF vNIC’s MAC address. See Applying Network Filtering in the RHEL 7 Virtualization Deployment and Administration Guide.
  - The bond is configured with fail_over_mac=active. This failover policy ensures that the MAC address of the bond is always the MAC address of the active slave. During failover, the virtual machine’s MAC address changes, with a slight disruption in traffic.
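For reference, the following is a minimal sketch of building the active-backup bond inside a RHEL guest with nmcli. The interface names are assumptions: ens1f0v0 stands for the VF (passthrough) vNIC and eth0 for the VirtIO vNIC as they appear inside the guest.
# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,primary=ens1f0v0"
# nmcli connection add type ethernet con-name bond0-vf ifname ens1f0v0 master bond0 slave-type bond
# nmcli connection add type ethernet con-name bond0-virtio ifname eth0 master bond0 slave-type bond
# nmcli connection up bond0
With primary=ens1f0v0, traffic uses the VF while it is attached; when the VF is detached for migration, the bond fails over to the VirtIO vNIC, and it fails back once a VF is plugged in again on the destination host.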
6.14.2. Optimizing Live Migration
Live virtual machine migration can be a resource-intensive operation. The following two options can be set globally for every virtual machine in the environment, at the cluster level, or at the individual virtual machine level to optimize live migration.
The Auto Converge migrations option allows you to set whether auto-convergence is used during live migration of virtual machines. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine.
The Enable migration compression option allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern.
Both options are disabled globally by default.
Configuring Auto-convergence and Migration Compression for Virtual Machine Migration
Configure the optimization settings at the global level:
- Enable auto-convergence at the global level:
# engine-config -s DefaultAutoConvergence=True
- Enable migration compression at the global level:
# engine-config -s DefaultMigrationCompression=True
- Restart the ovirt-engine service to apply the changes:
# systemctl restart ovirt-engine.service
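You can check the values currently in effect with the engine-config get option, for example:
# engine-config -g DefaultAutoConvergence
# engine-config -g DefaultMigrationCompression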
Configure the optimization settings at the cluster level:
- Click Compute → Clusters and select a cluster.
- Click Edit.
- Click the Migration Policy tab.
- From the Auto Converge migrations list, select Inherit from global setting, Auto Converge, or Don’t Auto Converge.
- From the Enable migration compression list, select Inherit from global setting, Compress, or Don’t Compress.
- Click OK.
Configure the optimization settings at the virtual machine level:
- Click Compute → Virtual Machines and select a virtual machine.
- Click Edit.
- Click the Host tab.
- From the Auto Converge migrations list, select Inherit from cluster setting, Auto Converge, or Don’t Auto Converge.
- From the Enable migration compression list, select Inherit from cluster setting, Compress, or Don’t Compress.
- Click OK.
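The same two options can also be set per virtual machine through the REST API. The following is a rough sketch only; manager.example.com, the admin@internal credentials, and the VM_UUID placeholder are assumptions, and the element names should be verified against the REST API Guide for your version:
# curl -k -u admin@internal:PASSWORD -X PUT \
  -H "Content-Type: application/xml" \
  -d '<vm><migration><auto_converge>true</auto_converge><compressed>true</compressed></migration></vm>' \
  https://manager.example.com/ovirt-engine/api/vms/VM_UUID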
6.14.3. Guest Agent Hooks
Hooks are scripts that trigger activity within a virtual machine when key events occur:
- Before migration
- After migration
- Before hibernation
- After hibernation
The hooks configuration base directory is /etc/ovirt-guest-agent/hooks.d on Linux systems and C:\Program Files\Redhat\RHEV\Drivers\Agent on Windows systems.
Each event has a corresponding subdirectory: before_migration and after_migration, before_hibernation and after_hibernation. All files or symbolic links in that directory will be executed.
The executing user on Linux systems is ovirtagent. If the script needs root permissions, the elevation must be executed by the creator of the hook script.
The executing user on Windows systems is the System Service user.
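As an illustration, a minimal Linux before_migration hook might look like the following. The file name 55_sync.sh is arbitrary; the script only logs a message and flushes filesystem buffers, so it does not need root permissions:
#!/bin/sh
# /etc/ovirt-guest-agent/hooks.d/before_migration/55_sync.sh
# Runs as the ovirtagent user just before the virtual machine is migrated.
logger "ovirt-guest-agent before_migration hook: syncing filesystems"
sync
Make the file executable (for example, chmod +x 55_sync.sh) so that the guest agent can run it.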
6.14.4. Automatic Virtual Machine Migration
Red Hat Virtualization Manager automatically initiates live migration of all virtual machines running on a host when the host is moved into maintenance mode. The destination host for each virtual machine is assessed as the virtual machine is migrated, in order to spread the load across the cluster.
From version 4.3, all virtual machines defined with manual or automatic migration modes are migrated when the host is moved into maintenance mode. However, for high performance and/or pinned virtual machines, a Maintenance Host window is displayed, asking you to confirm the action because the performance on the target host may be less than the performance on the current host.
The Manager automatically initiates live migration of virtual machines in order to maintain load-balancing or power-saving levels in line with scheduling policy. Specify the scheduling policy that best suits the needs of your environment. You can also disable automatic, or even manual, live migration of specific virtual machines where required.
If your virtual machines are configured for high performance, and/or if they have been pinned (by setting Passthrough Host CPU, CPU Pinning, or NUMA Pinning), the migration mode is set to Allow manual migration only. However, this can be changed to Allow Manual and Automatic mode if required. Special care should be taken when changing the default migration setting so that it does not result in a virtual machine migrating to a host that does not support high performance or pinning.
6.14.5. Preventing Automatic Migration of a Virtual Machine
Red Hat Virtualization Manager allows you to disable automatic migration of virtual machines. You can also disable manual migration of virtual machines by setting the virtual machine to run only on a specific host.
The ability to disable automatic migration and require a virtual machine to run on a particular host is useful when using application high availability products, such as Red Hat High Availability or Cluster Suite.
Preventing Automatic Migration of Virtual Machines
- Click Compute → Virtual Machines and select a virtual machine.
- Click Edit.
- Click the Host tab.
- In the Start Running On section, select Any Host in Cluster or Specific Host(s), which enables you to select multiple hosts.
Warning: Explicitly assigning a virtual machine to a specific host and disabling migration are mutually exclusive with Red Hat Virtualization high availability.
Important: If the virtual machine has host devices directly attached to it, and a different host is specified, the host devices from the previous host will be automatically removed from the virtual machine.
- Select Allow manual migration only or Do not allow migration from the Migration Options drop-down list.
- Optionally, select the Use custom migration downtime check box and specify a value in milliseconds.
- Click OK.
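The same placement and migration settings can be applied through the REST API. The sketch below pins a virtual machine to one host and disables migration; manager.example.com, the admin@internal credentials, and the VM_UUID and HOST_UUID placeholders are assumptions:
# curl -k -u admin@internal:PASSWORD -X PUT \
  -H "Content-Type: application/xml" \
  -d '<vm><placement_policy><hosts><host id="HOST_UUID"/></hosts><affinity>pinned</affinity></placement_policy></vm>' \
  https://manager.example.com/ovirt-engine/api/vms/VM_UUID
An affinity of user_migratable corresponds to Allow manual migration only, and migratable to allowing both manual and automatic migration.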
6.14.6. Manually Migrating Virtual Machines
A running virtual machine can be live migrated to any host within its designated host cluster. Live migration of virtual machines does not cause any service interruption. Migrating virtual machines to a different host is especially useful if the load on a particular host is too high. For live migration prerequisites, see Section 6.14.1, “Live Migration Prerequisites”.
For high performance virtual machines and/or virtual machines defined with Pass-Through Host CPU, CPU Pinning, or NUMA Pinning, the default migration mode is Manual. Select the Select Host Automatically option so that the virtual machine migrates to the host that offers the best performance.
When you place a host into maintenance mode, the virtual machines running on that host are automatically migrated to other hosts in the same cluster. You do not need to manually migrate these virtual machines.
Live migrating virtual machines between different clusters is generally not recommended. The currently only supported use case is documented at https://access.redhat.com/articles/1390733.
Manually Migrating Virtual Machines
- Click Compute → Virtual Machines and select a running virtual machine.
- Click Migrate.
- Use the radio buttons to select whether to Select Host Automatically or to Select Destination Host, specifying the host using the drop-down list.
Note: When the Select Host Automatically option is selected, the system determines the host to which the virtual machine is migrated according to the load balancing and power management rules set up in the scheduling policy.
- Click OK.
During migration, progress is shown in the Migration progress bar. Once migration is complete, the Host column updates to display the host to which the virtual machine has been migrated.
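A migration can also be triggered from the REST API. The sketch below migrates a running virtual machine to a specific destination host; sending an empty <action/> body instead lets the scheduler choose the host, as with Select Host Automatically. The Manager FQDN, credentials, and UUID placeholders are assumptions:
# curl -k -u admin@internal:PASSWORD -X POST \
  -H "Content-Type: application/xml" \
  -d '<action><host id="DESTINATION_HOST_UUID"/></action>' \
  https://manager.example.com/ovirt-engine/api/vms/VM_UUID/migrate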
6.14.7. Setting Migration Priority
Red Hat Virtualization Manager queues concurrent requests for migration of virtual machines off of a given host. The load balancing process runs every minute. Hosts already involved in a migration event are not included in the migration cycle until their migration event has completed. When there is a migration request in the queue and available hosts in the cluster to action it, a migration event is triggered in line with the load balancing policy for the cluster.
You can influence the ordering of the migration queue by setting the priority of each virtual machine; for example, setting mission critical virtual machines to migrate before others. Migrations will be ordered by priority; virtual machines with the highest priority will be migrated first.
Setting Migration Priority
- Click Compute → Virtual Machines and select a virtual machine.
- Click Edit.
- Select the High Availability tab.
- Select Low, Medium, or High from the Priority drop-down list.
- Click OK.
6.14.8. Canceling Ongoing Virtual Machine Migrations
A virtual machine migration is taking longer than you expected. You’d like to be sure where all virtual machines are running before you make any changes to your environment.
Canceling Ongoing Virtual Machine Migrations
- Select the migrating virtual machine. It is displayed in Compute → Virtual Machines with a status of Migrating from.
- Click More Actions, then click Cancel Migration.
The virtual machine status returns from Migrating from to Up.
6.14.9. Event and Log Notification upon Automatic Migration of Highly Available Virtual Servers
When a virtual server is automatically migrated because of the high availability function, the details of an automatic migration are documented in the Events tab and in the engine log to aid in troubleshooting, as illustrated in the following examples:
Example 6.4. Notification in the Events Tab of the Administration Portal
Highly Available Virtual_Machine_Name failed. It will be restarted automatically.
Virtual_Machine_Name was restarted on Host Host_Name
Example 6.5. Notification in the Manager engine.log
This log can be found on the Red Hat Virtualization Manager at /var/log/ovirt-engine/engine.log:
Failed to start Highly Available VM. Attempting to restart. VM Name: Virtual_Machine_Name, VM Id: Virtual_Machine_ID_Number