12.4. Migrating a virtual machine by using the command line


If the current host of a virtual machine (VM) becomes unsuitable or can no longer be used, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host. You can perform either a live migration or an offline migration. For the differences between the two scenarios, see How migrating virtual machines works.

Prerequisites

Hypervisor
The source host and the destination host both use the KVM hypervisor.
Network connection
The source host and the destination host are able to reach each other over the network. Use the ping utility to verify this.
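
For example, to verify the connection by sending a few probes to the destination host (the hostname is a placeholder):

    # ping -c 4 <example-destination>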
Open ports

Ensure the following ports are open on the destination host:

  • Port 22 is needed for connecting to the destination host by using SSH.
  • Port 16514 is needed for connecting to the destination host by using TLS.
  • Port 16509 is needed for connecting to the destination host by using TCP.
  • Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
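
If the destination host uses firewalld, you might open these ports as follows (a sketch; adjust the zone and the exact port set to your environment):

    # firewall-cmd --permanent --add-port=22/tcp
    # firewall-cmd --permanent --add-port=16509/tcp
    # firewall-cmd --permanent --add-port=16514/tcp
    # firewall-cmd --permanent --add-port=49152-49215/tcp
    # firewall-cmd --reload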
Hosts
For the migration to be supported by Red Hat, the source host and the destination host must use specific operating systems and machine types. To ensure this is the case, see Supported hosts for virtual machine migration.
CPU
The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration.
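
As a quick check, you can dump the VM's configuration on the source host, copy the resulting file to the destination host, and compare it there (the file path is a placeholder; virsh hypervisor-cpu-compare accepts a CPU definition or a full domain XML):

    # virsh dumpxml <example_VM> > /tmp/example_VM.xml
    # virsh hypervisor-cpu-compare /tmp/example_VM.xml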
Storage

The disk images of VMs that will be migrated must be accessible to both the source host and the destination host. This is optional for an offline migration, but required for migrating a running VM. A common way to ensure this accessibility is to keep the disk images on shared storage, such as an NFS share, that both hosts can reach.
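
For example, if the disk images reside on an NFS share, you can verify on each host that the share is mounted at the expected path (the path shown here is the libvirt default and an assumption):

    # df -hT /var/lib/libvirt/images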

Network bandwidth

When migrating a running VM, your network bandwidth must be higher than the rate at which the VM generates dirty memory pages.

To obtain the dirty page rate of your VM before you start the live migration, do the following:

  • Monitor the rate of dirty page generation of the VM for a short period of time, in this example 30 seconds:

    # virsh domdirtyrate-calc <example_VM> 30
  • After the monitoring finishes, obtain its results:

    # virsh domstats <example_VM> --dirtyrate
    Domain: 'example-VM'
      dirtyrate.calc_status=2
      dirtyrate.calc_start_time=200942
      dirtyrate.calc_period=30
      dirtyrate.megabytes_per_second=2

    In this example, the VM is generating 2 MB of dirty memory pages per second. If you attempt to live-migrate such a VM over a network with a bandwidth of 2 MB/s or less, the migration will not progress unless you pause the VM or reduce its workload.

    To ensure that the live migration finishes successfully, your network bandwidth should be significantly greater than the VM’s dirty page generation rate.

    Note

    The value of the calc_period option might differ based on the workload and dirty page rate. You can experiment with several calc_period values to determine the most suitable period that aligns with the dirty page rate in your environment.
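
    For example, to sample the dirty page rate over a longer, 60-second period instead:

    # virsh domdirtyrate-calc <example_VM> 60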

Bridge tap network specifics
When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not work after migration.
Connection protocol

When performing a VM migration, the virsh client on the source host can use one of several protocols to connect to the libvirt daemon on the destination host. Examples in the following procedure use an SSH connection, but you can choose a different one.

  • If you want libvirt to use an SSH connection, ensure that the virtqemud socket is enabled and running on the destination host.

    # systemctl enable --now virtqemud.socket
  • If you want libvirt to use a TLS connection, ensure that the virtproxyd-tls socket is enabled and running on the destination host.

    # systemctl enable --now virtproxyd-tls.socket
  • If you want libvirt to use a TCP connection, ensure that the virtproxyd-tcp socket is enabled and running on the destination host.

    # systemctl enable --now virtproxyd-tcp.socket

Procedure

  • Offline migration

    • The following command migrates a shut-off example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel.

      # virsh migrate --offline --persistent <example_VM> qemu+ssh://example-destination/system
  • Live migration

    1. The following command migrates the example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel. The VM keeps running during the migration.

      # virsh migrate --live --persistent <example_VM> qemu+ssh://example-destination/system
    2. Wait for the migration to complete. The process might take some time depending on network bandwidth, system load, and the size of the VM. If the --verbose option is not used for virsh migrate, the CLI does not display any progress indicators except errors.

      When the migration is in progress, you can use the virsh domjobinfo command to display the migration statistics.
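
      For example, run the following command on the source host while the migration is in progress:

      # virsh domjobinfo <example_VM>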

  • Multi-FD live migration

    • You can use multiple parallel connections to the destination host during the live migration. This is also known as multiple file descriptors (multi-FD) migration. With multi-FD migration, you can speed up the migration by utilizing all of the available network bandwidth for the migration process.

      # virsh migrate --live --persistent --parallel --parallel-connections 4 <example_VM> qemu+ssh://<example-destination>/system

      This example uses 4 multi-FD channels to migrate the <example_VM> VM. It is a good practice to use one channel for each 10 Gbps of available network bandwidth. The default value is 2 channels.

  • Live migration with an increased downtime limit

    • To improve the reliability of a live migration, you can set the maxdowntime parameter, which specifies the maximum amount of time, in milliseconds, the VM can be paused during live migration. Setting a larger downtime can help to ensure the migration completes successfully.

      # virsh migrate-setmaxdowntime <example_VM> <time_interval_in_milliseconds>
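
      For example, to allow up to one second of downtime (the value is in milliseconds), run the following on the source host:

      # virsh migrate-setmaxdowntime <example_VM> 1000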
  • Post-copy migration

    • If your VM has a large memory footprint, you can perform a post-copy migration, which transfers the source VM’s CPU state first and immediately starts the migrated VM on the destination host. The source VM’s memory pages are transferred after the migrated VM is already running on the destination host. Because of this, a post-copy migration can result in a smaller downtime of the migrated VM.

      However, the running VM on the destination host might try to access memory pages that have not yet been transferred, which causes a page fault. If too many page faults occur during the migration, the performance of the migrated VM can be severely degraded.

      Given the potential complications of a post-copy migration, it is usually better to use the following command, which starts a standard live migration and switches to a post-copy migration if the live migration does not finish within a specified amount of time.

      # virsh migrate --live --persistent --postcopy --timeout <time_interval_in_seconds> --timeout-postcopy <example_VM> qemu+ssh://<example-destination>/system
  • Auto-converged live migration

    • If your VM is under a heavy memory workload, you can use the --auto-converge option, which automatically slows down the execution speed of the VM’s CPU. This CPU throttling slows down memory writes, which means that the live migration might succeed even for VMs with a heavy memory workload.

      However, the CPU throttling does not help to resolve workloads where memory writes are not directly related to CPU execution speed, and it can negatively impact the performance of the VM during a live migration.

      # virsh migrate --live --persistent --auto-converge <example_VM> qemu+ssh://<example-destination>/system

Verification

  • For offline migration:

    • On the destination host, list the available VMs to verify that the VM was migrated successfully.

      # virsh list --all
      Id      Name             State
      ----------------------------------
      -     example-VM-1      shut off
  • For live migration:

    • On the destination host, list the available VMs to verify the state of the destination VM:

      # virsh list --all
      Id      Name             State
      ----------------------------------
      10    example-VM-1      running

      If the state of the VM is listed as running, it means that the migration is finished. However, if the live migration is still in progress, the state of the destination VM will be listed as paused.

  • For post-copy migration:

    1. On the source host, list the available VMs to verify the state of the source VM.

      # virsh list --all
      Id      Name             State
      ----------------------------------
      -     example-VM-1      shut off
    2. On the destination host, list the available VMs to verify the state of the destination VM.

      # virsh list --all
      Id      Name             State
      ----------------------------------
      10    example-VM-1      running

      If the state of the source VM is listed as shut off and the state of the destination VM is listed as running, it means that the migration is finished.
