12.4. Migrating a virtual machine by using the command line
If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host. You can perform a live migration or an offline migration. For differences between the two scenarios, see How migrating virtual machines works.
Prerequisites
- Hypervisor
- The source host and the destination host both use the KVM hypervisor.
- Network connection
- The source host and the destination host are able to reach each other over the network. Use the ping utility to verify this.
- Open ports
Ensure the following ports are open on the destination host:
- Port 22 is needed for connecting to the destination host by using SSH.
- Port 16514 is needed for connecting to the destination host by using TLS.
- Port 16509 is needed for connecting to the destination host by using TCP.
- Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
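The port requirements above can be satisfied with firewalld, for example. The following is a hedged sketch only; firewalld is an assumption, not something this procedure requires, and the loop prints the commands rather than running them so you can review them first.

```shell
# A minimal sketch, assuming the destination host uses firewalld.
# If you use a different firewall, open the same ports there instead.
# Print the firewall-cmd invocations for review before running them:
for port in 22/tcp 16509/tcp 16514/tcp 49152-49215/tcp; do
  echo firewall-cmd --permanent --add-port="$port"
done
# After running the printed commands, reload the firewall:
#   firewall-cmd --reload
```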
- Hosts
- For the migration to be supportable by Red Hat, the source host and destination host must be using specific operating systems and machine types. To ensure this is the case, see Supported hosts for virtual machine migration.
- CPU
- The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration.
- Storage
The disk images of the VMs that you want to migrate must be accessible to both the source host and the destination host. This is optional for offline migration, but required for migrating a running VM. To ensure storage accessibility for both hosts, one of the following must apply:
- You are using storage area network (SAN) logical units (LUNs).
- You are using a Ceph storage cluster.
- You have created a disk image with the same format and size as the source VM disk, and you will use the --copy-storage-all parameter when migrating the VM.
- The disk image is located on a separate networked location. For instructions to set up such shared VM storage, see Sharing virtual machine disk images with other hosts.
- Network bandwidth
When migrating a running VM, your network bandwidth must be higher than the rate at which the VM generates dirty memory pages.
To obtain the dirty page rate of your VM before you start the live migration, do the following:
Monitor the rate of dirty page generation of the VM for a short period of time.
# virsh domdirtyrate-calc <example_VM> 30

After the monitoring finishes, obtain its results:

# virsh domstats <example_VM> --dirtyrate
Domain: 'example-VM'
  dirtyrate.calc_status=2
  dirtyrate.calc_start_time=200942
  dirtyrate.calc_period=30
  dirtyrate.megabytes_per_second=2

In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live migration not to progress if you do not pause the VM or lower its workload.
To ensure that the live migration finishes successfully, your network bandwidth should be significantly greater than the VM’s dirty page generation rate.
Note: The value of the calc_period option might differ based on the workload and dirty page rate. You can experiment with several calc_period values to determine the most suitable period that aligns with the dirty page rate in your environment.
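As a rough sanity check before starting the migration, you can compare the measured dirtyrate.megabytes_per_second value against your available migration bandwidth. The helper below is hypothetical (it is not part of virsh); it only encodes the rule of thumb described above, using integer MB/s values.

```shell
# Hypothetical helper: decide whether a live migration is likely to
# converge, given the measured dirty-page rate and the available
# migration bandwidth, both as integers in MB/s.
can_converge() {
  local dirty_mbps=$1 bw_mbps=$2
  # Require the bandwidth to be strictly greater than the dirty-page
  # rate; otherwise the migration will not progress without pausing
  # the VM or lowering its workload.
  if [ "$bw_mbps" -gt "$dirty_mbps" ]; then
    echo "yes"
  else
    echo "no"
  fi
}

can_converge 2 10   # prints "yes"
can_converge 2 2    # prints "no"
```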
- Bridge tap network specifics
- When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not work after migration.
- Connection protocol
When performing a VM migration, the virsh client on the source host can use one of several protocols to connect to the libvirt daemon on the destination host. Examples in the following procedure use an SSH connection, but you can choose a different one.

If you want libvirt to use an SSH connection, ensure that the virtqemud socket is enabled and running on the destination host.

# systemctl enable --now virtqemud.socket

If you want libvirt to use a TLS connection, ensure that the virtproxyd-tls socket is enabled and running on the destination host.

# systemctl enable --now virtproxyd-tls.socket

If you want libvirt to use a TCP connection, ensure that the virtproxyd-tcp socket is enabled and running on the destination host.

# systemctl enable --now virtproxyd-tcp.socket
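The protocol-to-socket pairing above can be captured in a small helper. This function is hypothetical and for illustration only; the URI schemes and socket unit names are the ones listed above.

```shell
# Hypothetical helper: map a libvirt migration URI scheme to the
# socket unit that must be enabled and running on the destination host.
socket_for_scheme() {
  case "$1" in
    qemu+ssh) echo "virtqemud.socket" ;;
    qemu+tls) echo "virtproxyd-tls.socket" ;;
    qemu+tcp) echo "virtproxyd-tcp.socket" ;;
    *) echo "unsupported scheme: $1" >&2; return 1 ;;
  esac
}

# Example: the socket needed for an SSH-based migration URI.
socket_for_scheme qemu+ssh   # prints "virtqemud.socket"
```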
Procedure
Offline migration
The following command migrates a shut-off example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel.

# virsh migrate --offline --persistent <example_VM> qemu+ssh://example-destination/system
Live migration
The following command migrates the example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel. The VM keeps running during the migration.

# virsh migrate --live --persistent <example_VM> qemu+ssh://example-destination/system

Wait for the migration to complete. The process might take some time depending on network bandwidth, system load, and the size of the VM. If the --verbose option is not used for virsh migrate, the CLI does not display any progress indicators except errors.

When the migration is in progress, you can use the virsh domjobinfo utility to display the migration statistics.
Multi-FD live migration
You can use multiple parallel connections to the destination host during the live migration. This is also known as multiple file descriptors (multi-FD) migration. With multi-FD migration, you can speed up the migration by utilizing all of the available network bandwidth for the migration process.
# virsh migrate --live --persistent --parallel --parallel-connections 4 <example_VM> qemu+ssh://<example-destination>/system

This example uses 4 multi-FD channels to migrate the <example_VM> VM. It is a good practice to use one channel for each 10 Gbps of available network bandwidth. The default value is 2 channels.
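The one-channel-per-10-Gbps rule of thumb can be sketched as a small calculation. The function below is hypothetical; it assumes integer Gbps input, and clamping the result to the default of 2 channels as a floor is an assumption of this sketch.

```shell
# Hypothetical helper: suggest a --parallel-connections value for a
# given amount of available network bandwidth (integer Gbps), using
# the rule of thumb of one channel per 10 Gbps. Values below 2 are
# raised to 2, the default channel count (an assumption of this sketch).
recommended_channels() {
  local gbps=$1
  local n=$(( gbps / 10 ))
  if [ "$n" -lt 2 ]; then
    n=2
  fi
  echo "$n"
}

recommended_channels 40   # prints "4"
recommended_channels 10   # prints "2" (clamped to the default minimum)
```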
Live migration with an increased downtime limit
To improve the reliability of a live migration, you can set the maxdowntime parameter, which specifies the maximum amount of time, in milliseconds, that the VM can be paused during live migration. Setting a larger downtime can help to ensure that the migration completes successfully.

# virsh migrate-setmaxdowntime <example_VM> <time_interval_in_milliseconds>
Post-copy migration
If your VM has a large memory footprint, you can perform a post-copy migration, which transfers the source VM’s CPU state first and immediately starts the migrated VM on the destination host. The source VM’s memory pages are transferred after the migrated VM is already running on the destination host. Because of this, a post-copy migration can result in a smaller downtime of the migrated VM.
However, the running VM on the destination host might try to access memory pages that have not yet been transferred, which causes a page fault. If too many page faults occur during the migration, the performance of the migrated VM can be severely degraded.
Given the potential complications of a post-copy migration, it is usually better to use the following command, which starts a standard live migration and switches to a post-copy migration if the live migration does not finish within a specified amount of time.
# virsh migrate --live --persistent --postcopy --timeout <time_interval_in_seconds> --timeout-postcopy <example_VM> qemu+ssh://<example-destination>/system
Auto-converged live migration
If your VM is under a heavy memory workload, you can use the --auto-converge option. This option automatically slows down the execution speed of the VM's CPU. This CPU throttling can help to slow down memory writes, which means the live migration might succeed even for VMs with a heavy memory workload.

However, CPU throttling does not help with workloads where memory writes are not directly related to CPU execution speed, and it can negatively impact the performance of the VM during a live migration.
# virsh migrate --live --persistent --auto-converge <example_VM> qemu+ssh://<example-destination>/system
Verification
For offline migration:
On the destination host, list the available VMs to verify that the VM was migrated successfully.
# virsh list --all
 Id   Name           State
----------------------------------
 10   example-VM-1   shut off
For live migration:
On the destination host, list the available VMs to verify the state of the destination VM:
# virsh list --all
 Id   Name           State
----------------------------------
 10   example-VM-1   running

If the state of the VM is listed as running, it means that the migration is finished. However, if the live migration is still in progress, the state of the destination VM will be listed as paused.
For post-copy migration:
On the source host, list the available VMs to verify the state of the source VM.
# virsh list --all
 Id   Name           State
----------------------------------
 10   example-VM-1   shut off

On the destination host, list the available VMs to verify the state of the destination VM.

# virsh list --all
 Id   Name           State
----------------------------------
 10   example-VM-1   running

If the state of the source VM is listed as shut off and the state of the destination VM is listed as running, it means that the migration is finished.
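If you script these verification steps, you can extract the state column from the virsh list --all output. The awk helper below is hypothetical; it reads the listing on stdin and assumes the default column layout shown above.

```shell
# Hypothetical helper: print the State column for a named VM, reading
# `virsh list --all` output on stdin. The state can be multi-word
# (for example "shut off"), so everything from field 3 onward is kept.
vm_state() {
  awk -v vm="$1" '$2 == vm {
    out = $3
    for (i = 4; i <= NF; i++) out = out " " $i
    print out
  }'
}

# Usage (on the destination host):
#   virsh list --all | vm_state example-VM-1
```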