Chapter 12. Migrating virtual machines


If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host.

12.1. How migrating virtual machines works

You can migrate a running virtual machine (VM) without interrupting the workload, with only a brief downtime, by using a live migration. By default, the migrated VM is transient on the destination host and also remains defined on the source host. The essential part of a live migration is transferring the state of the VM’s memory and of any attached virtualized devices to the destination host. For the VM to remain functional on the destination host, the VM’s disk images must remain available to it.

To migrate a shut-off VM, you must use an offline migration, which copies the VM’s configuration to the destination host. For details, see the following table.

Table 12.1. VM migration types

Live migration

  Description: The VM continues to run on the source host machine while KVM is transferring the VM’s memory pages to the destination host. When the migration is nearly complete, KVM very briefly suspends the VM, and resumes it on the destination host.

  Use case: Useful for VMs that require constant uptime. However, for VMs that modify memory pages faster than KVM can transfer them, such as VMs under heavy I/O load, the live migration might fail. (1)

  Storage requirements: The VM’s disk images must be accessible both to the source host and the destination host during the migration. (2)

Offline migration

  Description: Moves the VM’s configuration to the destination host.

  Use case: Recommended for shut-off VMs and in situations when shutting down the VM does not disrupt your workloads.

  Storage requirements: The VM’s disk images do not have to be accessible to the source or destination host during migration, and can be copied or moved manually to the destination host instead.

(1) For possible solutions, see: Additional virsh migrate options for live migrations

(2) To achieve this, use one of the following:

  • Storage located on a shared network
  • The --copy-storage-all parameter for the virsh migrate command, which copies disk image contents from the source to the destination over the network (see the example after this list).
  • Storage area network (SAN) logical units (LUNs).
  • Ceph storage clusters
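
For example, if shared storage is not available, the disk images can be copied as part of the migration by adding the --copy-storage-all parameter to a standard live migration command, as in the following sketch (the VM and host names are placeholders):

  # virsh migrate --live --persistent --copy-storage-all <example_VM> qemu+ssh://<example-destination>/system
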
Note

For easier management of large-scale migrations, explore other Red Hat virtualization products, such as Red Hat OpenStack Platform (RHOSP) or OpenShift Virtualization.

12.2. Benefits of migrating virtual machines

Migrating virtual machines (VMs) can be useful for:

Load balancing
VMs can be moved to host machines with lower usage if their host becomes overloaded, or if another host is under-utilized.
Hardware independence
When you need to upgrade, add, or remove hardware devices on the host machine, you can safely relocate VMs to other hosts. This means that VMs do not experience any downtime for hardware improvements.
Energy saving
VMs can be redistributed to other hosts, and the unloaded host systems can thus be powered off to save energy and cut costs during low usage periods.
Geographic migration
VMs can be moved to another physical location for lower latency or when required for other reasons.

12.3. Limitations for migrating virtual machines

Before migrating virtual machines (VMs) in RHEL 9, ensure you are aware of the migration’s limitations.

  • VMs that use certain features and configurations, such as device passthrough, will not work correctly if migrated, or the migration will fail.

  • A migration between hosts that use Non-Uniform Memory Access (NUMA) pinning works only if the hosts have a similar topology. However, the performance of running workloads might be negatively affected by the migration.
  • Both the source and destination hosts must use specific RHEL versions that are supported for VM migration. For details, see Supported hosts for virtual machine migration.
  • The physical CPUs on the source host and the destination host must be identical, otherwise the migration might fail. Any differences between the hosts in the following CPU-related areas can cause problems with the migration:

    • CPU model

    • Physical machine firmware versions and settings

12.4. Migrating a virtual machine by using the command-line interface

If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host. You can perform a live migration or an offline migration. For differences between the two scenarios, see How migrating virtual machines works.

Prerequisites

  • Hypervisor: The source host and the destination host both use the KVM hypervisor.
  • Network connection: The source host and the destination host are able to reach each other over the network. Use the ping utility to verify this.
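    For example, with a hypothetical destination host name:

    # ping -c 4 <example-destination>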
  • Open ports: Ensure the following ports are open on the destination host.

    • Port 22 is needed for connecting to the destination host by using SSH.
    • Port 16509 is needed for connecting to the destination host by using TCP.
    • Port 16514 is needed for connecting to the destination host by using TLS.
    • Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
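
    If the destination host uses firewalld (an assumption; adapt the commands to your firewall configuration), you can open these ports with commands similar to the following:

      # firewall-cmd --permanent --add-port=22/tcp
      # firewall-cmd --permanent --add-port=16509/tcp
      # firewall-cmd --permanent --add-port=16514/tcp
      # firewall-cmd --permanent --add-port=49152-49215/tcp
      # firewall-cmd --reload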
  • Hosts: For the migration to be supportable by Red Hat, the source host and destination host must be using specific operating systems and machine types. To ensure this is the case, see Supported hosts for virtual machine migration.
  • CPU: The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration.
  • Storage: The disk images of VMs that will be migrated are accessible to both the source host and the destination host. This is optional for offline migration, but required for migrating a running VM. To ensure storage accessibility for both hosts, use one of the shared storage options described in How migrating virtual machines works, such as an NFS share or a Ceph storage cluster.

  • Network bandwidth: When migrating a running VM, your network bandwidth must be higher than the rate at which the VM generates dirty memory pages.

    To obtain the dirty page rate of your VM before you start the live migration, do the following:

    • Monitor the rate of dirty page generation of the VM for a short period of time.

      # virsh domdirtyrate-calc <example_VM> 30
    • After the monitoring finishes, obtain its results:

      # virsh domstats <example_VM> --dirtyrate
      Domain: 'example-VM'
        dirtyrate.calc_status=2
        dirtyrate.calc_start_time=200942
        dirtyrate.calc_period=30
        dirtyrate.megabytes_per_second=2

      In this example, the VM is generating 2 MB of dirty memory pages per second. If you attempt to live-migrate such a VM over a network with a bandwidth of 2 MB/s or less, the live migration will not progress unless you pause the VM or reduce its workload.

      To ensure that the live migration finishes successfully, Red Hat recommends that your network bandwidth is significantly greater than the VM’s dirty page generation rate.

      Note

      The value of the calc_period option might differ based on the workload and dirty page rate. You can experiment with several calc_period values to determine the most suitable period that aligns with the dirty page rate in your environment.

  • Bridge tap network specifics: When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not work after migration.
  • Connection protocol: When performing a VM migration, the virsh client on the source host can use one of several protocols to connect to the libvirt daemon on the destination host. Examples in the following procedure use an SSH connection, but you can choose a different one.

    • If you want libvirt to use an SSH connection, ensure that the virtqemud socket is enabled and running on the destination host.

      # systemctl enable --now virtqemud.socket
    • If you want libvirt to use a TLS connection, ensure that the virtproxyd-tls socket is enabled and running on the destination host.

      # systemctl enable --now virtproxyd-tls.socket
    • If you want libvirt to use a TCP connection, ensure that the virtproxyd-tcp socket is enabled and running on the destination host.

      # systemctl enable --now virtproxyd-tcp.socket

Procedure

To migrate a VM from one host to another, use the virsh migrate command.

Offline migration

  • The following command migrates a shut-off example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel.

    # virsh migrate --offline --persistent <example_VM> qemu+ssh://example-destination/system

Live migration

  1. The following command migrates the example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel. The VM keeps running during the migration.

    # virsh migrate --live --persistent <example_VM> qemu+ssh://example-destination/system
  2. Wait for the migration to complete. The process might take some time depending on network bandwidth, system load, and the size of the VM. If the --verbose option is not used for virsh migrate, the CLI does not display any progress indicators except errors.

    When the migration is in progress, you can use the virsh domjobinfo utility to display the migration statistics.
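
    For example, run the following on the source host while the migration is running (the VM name is a placeholder):

    # virsh domjobinfo <example_VM>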

Multi-FD live migration

  • You can use multiple parallel connections to the destination host during the live migration. This is also known as multiple file descriptors (multi-FD) migration. With multi-FD migration, you can speed up the migration by utilizing all of the available network bandwidth for the migration process.

    # virsh migrate --live --persistent --parallel --parallel-connections 4 <example_VM> qemu+ssh://<example-destination>/system

    This example uses 4 multi-FD channels to migrate the <example_VM> VM. It is recommended to use one channel for each 10 Gbps of available network bandwidth. The default value is 2 channels.

Live migration with an increased downtime limit

  • To improve the reliability of a live migration, you can set the maxdowntime parameter, which specifies the maximum amount of time, in milliseconds, the VM can be paused during live migration. Setting a larger downtime can help to ensure the migration completes successfully.

    # virsh migrate-setmaxdowntime <example_VM> <time_interval_in_milliseconds>
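
    For example, to allow the example-VM VM to be paused for up to one second during its live migration (an illustrative value; tune it for your workload):

    # virsh migrate-setmaxdowntime example-VM 1000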

Post-copy migration

  • If your VM has a large memory footprint, you can perform a post-copy migration, which transfers the source VM’s CPU state first and immediately starts the migrated VM on the destination host. The source VM’s memory pages are transferred after the migrated VM is already running on the destination host. Because of this, a post-copy migration can result in a smaller downtime of the migrated VM.

    However, the running VM on the destination host might try to access memory pages that have not yet been transferred, which causes a page fault. If too many page faults occur during the migration, the performance of the migrated VM can be severely degraded.

    Given the potential complications of a post-copy migration, it is recommended to use the following command that starts a standard live migration and switches to a post-copy migration if the live migration cannot be finished in a specified amount of time.

    # virsh migrate --live --persistent --postcopy --timeout <time_interval_in_seconds> --timeout-postcopy <example_VM> qemu+ssh://<example-destination>/system

Auto-converged live migration

  • If your VM is under a heavy memory workload, you can use the --auto-converge option. This option automatically slows down the execution speed of the VM’s CPU. As a consequence, this CPU throttling can help to slow down memory writes, which means the live migration might succeed even in VMs with a heavy memory workload.

    However, the CPU throttling does not help to resolve workloads where memory writes are not directly related to CPU execution speed, and it can negatively impact the performance of the VM during a live migration.

    # virsh migrate --live --persistent --auto-converge <example_VM> qemu+ssh://<example-destination>/system

Verification

  • For offline migration:

    • On the destination host, list the available VMs to verify that the VM was migrated successfully.

      # virsh list --all
      Id      Name             State
      ----------------------------------
      10    example-VM-1      shut off
  • For live migration:

    • On the destination host, list the available VMs to verify the state of the destination VM:

      # virsh list --all
      Id      Name             State
      ----------------------------------
      10    example-VM-1      running

      If the state of the VM is listed as running, it means that the migration is finished. However, if the live migration is still in progress, the state of the destination VM will be listed as paused.

  • For post-copy migration:

    1. On the source host, list the available VMs to verify the state of the source VM.

      # virsh list --all
      Id      Name             State
      ----------------------------------
      10    example-VM-1      shut off
    2. On the destination host, list the available VMs to verify the state of the destination VM.

      # virsh list --all
      Id      Name             State
      ----------------------------------
      10    example-VM-1      running

      If the state of the source VM is listed as shut off and the state of the destination VM is listed as running, it means that the migration is finished.

Additional resources

  • virsh migrate --help command
  • virsh (1) man page on your system

12.5. Live migrating a virtual machine by using the web console

If you want to migrate a virtual machine (VM) that is performing tasks which require it to be constantly running, you can migrate that VM to another KVM host without shutting it down. This is also known as live migration. The following instructions explain how to do so by using the web console.

Prerequisites

  • You have installed the RHEL 9 web console.

    For instructions, see Installing and enabling the web console.

  • The web console VM plugin is installed on your system.
  • Hypervisor: The source host and the destination host both use the KVM hypervisor.
  • Hosts: The source and destination hosts are running.
  • Open ports: Ensure the following ports are open on the destination host.

    • Port 22 is needed for connecting to the destination host by using SSH.
    • Port 16509 is needed for connecting to the destination host by using TCP.
    • Port 16514 is needed for connecting to the destination host by using TLS.
    • Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
  • CPU: The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration.
  • Storage: The disk images of VMs that will be migrated are accessible to both the source host and the destination host. This is optional for offline migration, but required for migrating a running VM. To ensure storage accessibility for both hosts, use one of the shared storage options described in How migrating virtual machines works, such as an NFS share or a Ceph storage cluster.

  • Network bandwidth: When migrating a running VM, your network bandwidth must be higher than the rate at which the VM generates dirty memory pages.

    To obtain the dirty page rate of your VM before you start the live migration, do the following in your command-line interface:

    1. Monitor the rate of dirty page generation of the VM for a short period of time.

      # virsh domdirtyrate-calc vm-name 30
    2. After the monitoring finishes, obtain its results:

      # virsh domstats vm-name --dirtyrate
      Domain: 'vm-name'
        dirtyrate.calc_status=2
        dirtyrate.calc_start_time=200942
        dirtyrate.calc_period=30
        dirtyrate.megabytes_per_second=2

      In this example, the VM is generating 2 MB of dirty memory pages per second. If you attempt to live-migrate such a VM over a network with a bandwidth of 2 MB/s or less, the live migration will not progress unless you pause the VM or reduce its workload.

      To ensure that the live migration finishes successfully, Red Hat recommends that your network bandwidth is significantly greater than the VM’s dirty page generation rate.

      Note

      The value of the calc_period option might differ based on the workload and dirty page rate. You can experiment with several calc_period values to determine the most suitable period that aligns with the dirty page rate in your environment.

  • Bridge tap network specifics: When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not work after migration.

Procedure

  1. In the Virtual Machines interface of the web console, click the Menu button of the VM that you want to migrate.

    A drop down menu appears with controls for various VM operations.

    (Image: The Virtual Machines main page displaying the available options when the VM is running.)
  2. Click Migrate

    The Migrate VM to another host dialog appears.

    (Image: The Migrate VM to another host dialog box with fields to enter the URI of the destination host and set the migration duration.)
  3. Enter the URI of the destination host.
  4. Configure the duration of the migration:

    • Permanent - Do not check the box if you want to migrate the VM permanently. Permanent migration completely removes the VM configuration from the source host.
    • Temporary - Temporary migration migrates a copy of the VM to the destination host. This copy is deleted from the destination host when the VM is shut down. The original VM remains on the source host.
  5. Click Migrate

    Your VM is migrated to the destination host.

Verification

To verify whether the VM has been successfully migrated and is working correctly:

  • Confirm whether the VM appears in the list of VMs available on the destination host.
  • Start the migrated VM and observe if it boots up.

12.6. Live migrating a virtual machine with an attached Mellanox virtual function

As a Technology Preview, you can live migrate a virtual machine (VM) with an attached virtual function (VF) of a Mellanox networking device. Currently, this is only possible when using a Mellanox CX-7 networking device. The VF on the Mellanox CX-7 networking device uses a new mlx5_vfio_pci driver, which adds functionality that is necessary for the live migration, and libvirt binds the new driver to the VF automatically.

Limitations

Currently, some virtualization features cannot be used when live migrating a VM with an attached Mellanox virtual function:

  • Calculating the dirty memory page generation rate of the VM.

    Currently, when migrating a VM with an attached Mellanox VF, live migration data and statistics provided by the virsh domjobinfo and virsh domdirtyrate-calc commands are inaccurate, because the calculations only count guest RAM without including the impact of the attached VF.

  • Using a post-copy live migration.
  • Using a virtual I/O Memory Management Unit (vIOMMU) device in the VM.
Important

This feature is included in RHEL 9 only as a Technology Preview, which means it is not supported.

Prerequisites

  • You have a Mellanox CX-7 networking device with a firmware version that is equal to or greater than 28.36.1010.

    Refer to Mellanox documentation for details about firmware versions.

  • The mstflint package is installed on both the source and destination host:

    # dnf install mstflint
  • The Mellanox CX-7 networking device has VF_MIGRATION_MODE set to MIGRATION_ENABLED:

    # mstconfig -d <device_pci_address> query | grep -i VF_migration
    
    VF_MIGRATION_MODE                           MIGRATION_ENABLED(2)
    • You can set VF_MIGRATION_MODE to MIGRATION_ENABLED by using the following command:

      # mstconfig -d <device_pci_address> set VF_MIGRATION_MODE=2
  • The openvswitch package is installed on both the source and destination host:

    # dnf install openvswitch
  • All of the general SR-IOV devices prerequisites. For details, see Attaching SR-IOV networking devices to virtual machines
  • All of the general VM migration prerequisites. For details, see Migrating a virtual machine by using the command-line interface

Procedure

  1. On the source host, set the Mellanox networking device to the switchdev mode.

    # devlink dev eswitch set pci/<device_pci_address> mode switchdev
  2. On the source host, create a virtual function on the Mellanox device.

    # echo 1 > /sys/bus/pci/devices/0000\:e1\:00.0/sriov_numvfs

    The 0000\:e1\:00.0 part of the file path is based on the PCI address of the device, which in this example is 0000:e1:00.0.

  3. On the source host, unbind the VF from its driver.

    # virsh nodedev-detach <vf_pci_address> --driver pci-stub

    You can view the PCI address of the VF by using the following command:

    # lshw -c network -businfo
    
    Bus info                     Device             Class           Description
    ===========================================================================
    pci@0000:e1:00.0  enp225s0np0    network        MT2910 Family [ConnectX-7]
    pci@0000:e1:00.1  enp225s0v0     network        ConnectX Family mlx5Gen Virtual Function
  4. On the source host, enable the migration function of the VF.

    # devlink port function set pci/0000:e1:00.0/1 migratable enable

    In this example, pci/0000:e1:00.0/1 refers to the first VF on the Mellanox device with the given PCI address.

  5. On the source host, configure Open vSwitch (OVS) for the migration of the VF. While the Mellanox device is in switchdev mode, it cannot transfer data over the network.

    1. Ensure the openvswitch service is running.

      # systemctl start openvswitch
    2. Enable hardware offloading to improve networking performance.

      # ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    3. Increase the maximum idle time to ensure network connections remain open during the migration.

      # ovs-vsctl set Open_vSwitch . other_config:max-idle=300000
    4. Create a new bridge in the OVS instance.

      # ovs-vsctl add-br <bridge_name>
    5. Restart the openvswitch service.

      # systemctl restart openvswitch
    6. Add the physical Mellanox device to the OVS bridge.

      # ovs-vsctl add-port <bridge_name> enp225s0np0

      In this example, <bridge_name> is the name of the bridge you created in step 4 and enp225s0np0 is the network interface name of the Mellanox device.

    7. Add the VF of the Mellanox device to the OVS bridge.

      # ovs-vsctl add-port <bridge_name> enp225s0npf0vf0

      In this example, <bridge_name> is the name of the bridge you created in step 4 and enp225s0npf0vf0 is the network interface name of the VF.

  6. Repeat steps 1-5 on the destination host.
  7. On the source host, open a new file, such as mlx_vf.xml, and add the following XML configuration of the VF:

     <interface type='hostdev' managed='yes'>
          <mac address='52:54:00:56:8c:f7'/>
          <source>
            <address type='pci' domain='0x0000' bus='0xe1' slot='0x00' function='0x1'/>
          </source>
     </interface>

    This example configures a pass-through of the VF as a network interface for the VM. Ensure the MAC address is unique, and use the PCI address of the VF on the source host.

  8. On the source host, attach the VF XML file to the VM.

    # virsh attach-device <vm_name> mlx_vf.xml --live --config

    In this example, mlx_vf.xml is the name of the XML file with the VF configuration. Use the --live option to attach the device to a running VM.

  9. On the source host, start the live migration of the running VM with the attached VF.

    # virsh migrate --live --domain <vm_name> --desturi qemu+ssh://<destination_host_ip_address>/system

    For more details about performing a live migration, see Migrating a virtual machine by using the command-line interface

Verification

  1. In the migrated VM, view the network interface name of the Mellanox VF.

    # ifconfig
    
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.1.10  netmask 255.255.255.0  broadcast 192.168.1.255
            inet6 fe80::a00:27ff:fe4e:66a1  prefixlen 64  scopeid 0x20<link>
            ether 08:00:27:4e:66:a1  txqueuelen 1000  (Ethernet)
            RX packets 100000  bytes 6543210 (6.5 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 100000  bytes 6543210 (6.5 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    enp4s0f0v0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.3.10  netmask 255.255.255.0  broadcast 192.168.3.255
            inet6 fe80::a00:27ff:fe4e:66c3  prefixlen 64  scopeid 0x20<link>
            ether 08:00:27:4e:66:c3  txqueuelen 1000  (Ethernet)
            RX packets 200000  bytes 12345678 (12.3 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 200000  bytes 12345678 (12.3 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  2. In the migrated VM, check that the Mellanox VF works, for example:

    # ping -I <VF_interface_name> 8.8.8.8
    
    PING 8.8.8.8 (8.8.8.8) from 192.168.3.10 <VF_interface_name>: 56(84) bytes of data.
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=27.4 ms
    64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=26.9 ms
    
    --- 8.8.8.8 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1002ms
    rtt min/avg/max/mdev = 26.944/27.046/27.148/0.102 ms

12.7. Live migrating a virtual machine with an attached NVIDIA vGPU

If you use virtual GPUs (vGPUs) in your virtualization workloads, you can live migrate a running virtual machine (VM) with an attached vGPU to another KVM host. Currently, this is only possible with NVIDIA GPUs.

Prerequisites

  • You have an NVIDIA GPU with an NVIDIA Virtual GPU Software Driver version that supports this functionality. Refer to the relevant NVIDIA vGPU documentation for more details.
  • You have a correctly configured NVIDIA vGPU assigned to a VM. For instructions, see: Setting up NVIDIA vGPU devices

    Note

    It is also possible to live migrate a VM with multiple vGPU devices attached.

  • The host uses RHEL 9.4 or later as the operating system.
  • All of the vGPU migration prerequisites that are documented by NVIDIA. Refer to the relevant NVIDIA vGPU documentation for more details.
  • All of the general VM migration prerequisites. For details, see Migrating a virtual machine by using the command-line interface

Limitations

  • Certain NVIDIA GPU features can disable the migration. For more information, see the specific NVIDIA documentation for your graphics card.
  • Some GPU workloads are not compatible with the downtime that happens during a migration. As a consequence, the GPU workloads might stop or crash. It is recommended to test if your workloads are compatible with the downtime before attempting a vGPU live migration.
  • Currently, vGPU live migration fails if the vGPU driver version differs on the source and destination hosts.
  • Currently, some general virtualization features cannot be used when live migrating a VM with an attached vGPU:

    • Calculating the dirty memory page generation rate of the VM.

      Currently, live migration data and statistics provided by the virsh domjobinfo and virsh domdirtyrate-calc commands are inaccurate when migrating a VM with an attached vGPU, because the calculations only count guest RAM without including vRAM from the vGPU.

    • Using a post-copy live migration.
    • Using a virtual I/O Memory Management Unit (vIOMMU) device in the VM.

Procedure

Perform the live migration as described in Migrating a virtual machine by using the command-line interface. No additional migration parameters are required for the attached vGPU device.
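
For illustration, a standard live migration command such as the following can be used unchanged (the VM name and destination host are placeholders):

    # virsh migrate --live --persistent <example_VM> qemu+ssh://<example-destination>/system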

12.8. Sharing virtual machine disk images with other hosts

To perform a live migration of a virtual machine (VM) between supported KVM hosts, you must also migrate the storage of the running VM in a way that makes it possible for the VM to read from and write to the storage during the migration process.

One of the methods to do this is using shared VM storage. The following procedure provides instructions for sharing a locally stored VM image with the source host and the destination host by using the NFS protocol.

Prerequisites

  • The VM intended for migration is shut down.
  • Optional: A host system that is not the source or the destination host is available for hosting the storage, and both the source and the destination host can reach it through the network. This is the optimal solution for shared storage and is recommended by Red Hat.
  • Make sure that NFS file locking is not used as it is not supported in KVM.
  • The NFS protocol is installed and enabled on the source and destination hosts. See Deploying an NFS server.
  • The virt_use_nfs SELinux boolean is set to on.

    # setsebool virt_use_nfs 1
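
    If you want this setting to persist across host reboots, you can optionally add the -P option (not required by this procedure):

    # setsebool -P virt_use_nfs 1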

Procedure

  1. Connect to the host that will provide shared storage. In this example, it is the example-shared-storage host:

    # ssh root@example-shared-storage
    root@example-shared-storage's password:
    Last login: Mon Sep 24 12:05:36 2019
    root~#
  2. Create a directory on the example-shared-storage host that will hold the disk image and that will be shared with the migration hosts:

    # mkdir /var/lib/libvirt/shared-images
  3. Copy the disk image of the VM from the source host to the newly created directory. The following example copies the disk image example-disk-1 of the VM to the /var/lib/libvirt/shared-images/ directory of the example-shared-storage host:

    # scp /var/lib/libvirt/images/example-disk-1.qcow2 root@example-shared-storage:/var/lib/libvirt/shared-images/example-disk-1.qcow2
  4. On the host that you want to use for sharing the storage, add the sharing directory to the /etc/exports file. The following example shares the /var/lib/libvirt/shared-images directory with the example-source-machine and example-destination-machine hosts:

    # /var/lib/libvirt/shared-images example-source-machine(rw,no_root_squash) example-destination-machine(rw,no_root_squash)
  5. Run the exportfs -a command for the changes in the /etc/exports file to take effect.

    # exportfs -a
  6. On both the source and destination host, mount the shared directory in the /var/lib/libvirt/images directory:

    # mount example-shared-storage:/var/lib/libvirt/shared-images /var/lib/libvirt/images
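
    Optionally, to keep the shared directory mounted across reboots, you can add an entry to /etc/fstab on both hosts, for example (a sketch that reuses this procedure's host and directory names):

    example-shared-storage:/var/lib/libvirt/shared-images /var/lib/libvirt/images nfs defaults 0 0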

Verification

  • Start the VM on the source host and observe if it boots successfully.


12.9. Verifying host CPU compatibility for virtual machine migration

For migrated virtual machines (VMs) to work correctly on the destination host, the CPUs on the source and the destination hosts must be compatible. To ensure that this is the case, calculate a common CPU baseline before you begin the migration.

Note

The instructions in this section use an example migration scenario with the following host CPUs:

  • Source host: Intel Core i7-8650U
  • Destination host: Intel Xeon CPU E5-2620 v2

Prerequisites

  • Virtualization is installed and enabled on your system.
  • You have administrator access to the source host and the destination host for the migration.

Procedure

  1. On the source host, obtain its CPU features and paste them into a new XML file, such as domCaps-CPUs.xml.

    # virsh domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" - > domCaps-CPUs.xml
  2. In the XML file, replace the <mode> </mode> tags with <cpu> </cpu>.
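
    For example, you can make this replacement with a sed one-liner similar to the following (a sketch assuming GNU sed; you can also edit the file manually):

    # sed -i -e 's|<mode[^>]*>|<cpu>|' -e 's|</mode>|</cpu>|' domCaps-CPUs.xml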
  3. Optional: Verify that the content of the domCaps-CPUs.xml file looks similar to the following:

    # cat domCaps-CPUs.xml
    
        <cpu>
              <model fallback="forbid">Skylake-Client-IBRS</model>
              <vendor>Intel</vendor>
              <feature policy="require" name="ss"/>
              <feature policy="require" name="vmx"/>
              <feature policy="require" name="pdcm"/>
              <feature policy="require" name="hypervisor"/>
              <feature policy="require" name="tsc_adjust"/>
              <feature policy="require" name="clflushopt"/>
              <feature policy="require" name="umip"/>
              <feature policy="require" name="md-clear"/>
              <feature policy="require" name="stibp"/>
              <feature policy="require" name="arch-capabilities"/>
              <feature policy="require" name="ssbd"/>
              <feature policy="require" name="xsaves"/>
              <feature policy="require" name="pdpe1gb"/>
              <feature policy="require" name="invtsc"/>
              <feature policy="require" name="ibpb"/>
              <feature policy="require" name="ibrs"/>
              <feature policy="require" name="amd-stibp"/>
              <feature policy="require" name="amd-ssbd"/>
              <feature policy="require" name="rsba"/>
              <feature policy="require" name="skip-l1dfl-vmentry"/>
              <feature policy="require" name="pschange-mc-no"/>
              <feature policy="disable" name="hle"/>
              <feature policy="disable" name="rtm"/>
        </cpu>
  4. On the destination host, use the following command to obtain its CPU features:

    # virsh domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" -
    
        <mode name="host-model" supported="yes">
                <model fallback="forbid">IvyBridge-IBRS</model>
                <vendor>Intel</vendor>
                <feature policy="require" name="ss"/>
                <feature policy="require" name="vmx"/>
                <feature policy="require" name="pdcm"/>
                <feature policy="require" name="pcid"/>
                <feature policy="require" name="hypervisor"/>
                <feature policy="require" name="arat"/>
                <feature policy="require" name="tsc_adjust"/>
                <feature policy="require" name="umip"/>
                <feature policy="require" name="md-clear"/>
                <feature policy="require" name="stibp"/>
                <feature policy="require" name="arch-capabilities"/>
                <feature policy="require" name="ssbd"/>
                <feature policy="require" name="xsaveopt"/>
                <feature policy="require" name="pdpe1gb"/>
                <feature policy="require" name="invtsc"/>
                <feature policy="require" name="ibpb"/>
                <feature policy="require" name="amd-ssbd"/>
                <feature policy="require" name="skip-l1dfl-vmentry"/>
                <feature policy="require" name="pschange-mc-no"/>
        </mode>
  5. Add the obtained CPU features from the destination host to the domCaps-CPUs.xml file on the source host. Again, replace the <mode> </mode> tags with <cpu> </cpu> and save the file.
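
    If the source host can reach the destination host over SSH, one possible way to append the destination host's CPU features to the file directly from the source host is the following sketch (the destination host name is a placeholder):

    # virsh -c qemu+ssh://example-destination/system domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" - >> domCaps-CPUs.xml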
  6. Optional: Verify that the XML file now contains the CPU features from both hosts.

    # cat domCaps-CPUs.xml
    
        <cpu>
              <model fallback="forbid">Skylake-Client-IBRS</model>
              <vendor>Intel</vendor>
              <feature policy="require" name="ss"/>
              <feature policy="require" name="vmx"/>
              <feature policy="require" name="pdcm"/>
              <feature policy="require" name="hypervisor"/>
              <feature policy="require" name="tsc_adjust"/>
              <feature policy="require" name="clflushopt"/>
              <feature policy="require" name="umip"/>
              <feature policy="require" name="md-clear"/>
              <feature policy="require" name="stibp"/>
              <feature policy="require" name="arch-capabilities"/>
              <feature policy="require" name="ssbd"/>
              <feature policy="require" name="xsaves"/>
              <feature policy="require" name="pdpe1gb"/>
              <feature policy="require" name="invtsc"/>
              <feature policy="require" name="ibpb"/>
              <feature policy="require" name="ibrs"/>
              <feature policy="require" name="amd-stibp"/>
              <feature policy="require" name="amd-ssbd"/>
              <feature policy="require" name="rsba"/>
              <feature policy="require" name="skip-l1dfl-vmentry"/>
              <feature policy="require" name="pschange-mc-no"/>
              <feature policy="disable" name="hle"/>
              <feature policy="disable" name="rtm"/>
        </cpu>
        <cpu>
              <model fallback="forbid">IvyBridge-IBRS</model>
              <vendor>Intel</vendor>
              <feature policy="require" name="ss"/>
              <feature policy="require" name="vmx"/>
              <feature policy="require" name="pdcm"/>
              <feature policy="require" name="pcid"/>
              <feature policy="require" name="hypervisor"/>
              <feature policy="require" name="arat"/>
              <feature policy="require" name="tsc_adjust"/>
              <feature policy="require" name="umip"/>
              <feature policy="require" name="md-clear"/>
              <feature policy="require" name="stibp"/>
              <feature policy="require" name="arch-capabilities"/>
              <feature policy="require" name="ssbd"/>
              <feature policy="require" name="xsaveopt"/>
              <feature policy="require" name="pdpe1gb"/>
              <feature policy="require" name="invtsc"/>
              <feature policy="require" name="ibpb"/>
              <feature policy="require" name="amd-ssbd"/>
              <feature policy="require" name="skip-l1dfl-vmentry"/>
              <feature policy="require" name="pschange-mc-no"/>
        </cpu>
  7. Use the XML file to calculate the CPU feature baseline for the VM you intend to migrate.

    # virsh hypervisor-cpu-baseline domCaps-CPUs.xml
    
        <cpu mode='custom' match='exact'>
          <model fallback='forbid'>IvyBridge-IBRS</model>
          <vendor>Intel</vendor>
          <feature policy='require' name='ss'/>
          <feature policy='require' name='vmx'/>
          <feature policy='require' name='pdcm'/>
          <feature policy='require' name='pcid'/>
          <feature policy='require' name='hypervisor'/>
          <feature policy='require' name='arat'/>
          <feature policy='require' name='tsc_adjust'/>
          <feature policy='require' name='umip'/>
          <feature policy='require' name='md-clear'/>
          <feature policy='require' name='stibp'/>
          <feature policy='require' name='arch-capabilities'/>
          <feature policy='require' name='ssbd'/>
          <feature policy='require' name='xsaveopt'/>
          <feature policy='require' name='pdpe1gb'/>
          <feature policy='require' name='invtsc'/>
          <feature policy='require' name='ibpb'/>
          <feature policy='require' name='amd-ssbd'/>
          <feature policy='require' name='skip-l1dfl-vmentry'/>
          <feature policy='require' name='pschange-mc-no'/>
        </cpu>
  8. Open the XML configuration of the VM you intend to migrate, and replace the contents of the <cpu> section with the settings obtained in the previous step.

    # virsh edit <vm_name>
  9. If the VM is running, shut down the VM and start it again.

    # virsh shutdown <vm_name>
    
    # virsh start <vm_name>

12.10. Supported hosts for virtual machine migration

For the virtual machine (VM) migration to work properly and be supported by Red Hat, the source and destination hosts must use specific RHEL versions and machine types. The following table shows supported VM migration paths.

Table 12.2. Live migration compatibility
Migration method | Release type | Future version example | Support status

Forward | Minor release | 9.0.1 → 9.1 | On supported RHEL 9 systems: machine type q35.

Backward | Minor release | 9.1 → 9.0.1 | On supported RHEL 9 systems: machine type q35.

Note

Support level is different for other virtualization solutions provided by Red Hat, including RHOSP and OpenShift Virtualization.
