Chapter 12. Managing storage for virtual machines


A virtual machine (VM), just like a physical machine, requires storage for data, program, and system files. As a VM administrator, you can assign physical or network-based storage to your VMs as virtual storage. You can also modify how the storage is presented to a VM regardless of the underlying hardware.

12.1. Available methods for attaching storage to virtual machines

To provide storage for your virtual machines (VMs) running on a RHEL 10 host, you can use multiple types of storage hardware and services. Each of these types has different requirements, benefits, and use cases.

File-based storage

File-based virtual disks are disk image files on your host file system, which are stored in a directory-based libvirt storage pool.

File-based disks are quick to set up and easy to migrate, but they create additional overhead for the local file system, which can have a negative impact on performance.

In addition, certain libvirt features, such as snapshots, require a file-based virtual disk.

For instructions on attaching file-based storage to your VMs, see Attaching a file-based virtual disk to your virtual machine by using the command line or Attaching a file-based virtual disk to your virtual machine by using the web console.

Disk-based storage

VMs can use an entire physical disk or partition instead of virtual disks.

Disk-based storage has the best performance of the available storage types and also provides direct access to host disks. However, you cannot create snapshots for such storage, and it is difficult to migrate.

For instructions on attaching disk-based storage to your VMs, see Attaching disk-based storage to your virtual machine by using the command line or Attaching disk-based storage to your virtual machine by using the web console.

LVM-based storage

VMs can use the Logical Volume Manager (LVM) to allocate storage directly from a volume group (VG).

LVM storage has better performance than file-based disks and is easy to resize, but can be more difficult to migrate.

For instructions on attaching LVM-based storage to your VMs, see Attaching LVM-based storage to your virtual machine by using the command line or Attaching LVM-based storage to your virtual machine by using the web console.

Network-based storage

Instead of local hardware, you can use remote storage, such as the Network File System (NFS).

This is useful for shared storage in clusters or high-availability environments. However, network-based storage is generally slower than local storage, and your network bandwidth can further limit the performance.

For instructions on attaching NFS-based storage to your VMs, see Attaching NFS-based storage to your virtual machine by using the command line or Attaching NFS-based storage to your virtual machine by using the web console.

12.2. Viewing virtual machine storage information by using the web console

By using the web console, you can view detailed information about storage resources available to your virtual machines (VMs).

Procedure

  1. Log in to the RHEL 10 web console.

    For details, see Logging in to the web console.

  2. To view a list of the storage pools available on your host, click Storage Pools at the top of the Virtual Machines interface.

    The Storage pools window appears, showing a list of configured storage pools.

    The information includes the following:

    • Name - The name of the storage pool.
    • Size - The current allocation and the total capacity of the storage pool.
    • Connection - The connection used to access the storage pool.
    • State - The state of the storage pool.
  3. Click the arrow next to the storage pool whose information you want to see.

    The row expands to reveal the Overview pane with detailed information about the selected storage pool.

    The information includes:

    • Target path - The location of the storage pool.
    • Persistent - Indicates whether or not the storage pool has a persistent configuration.
    • Autostart - Indicates whether or not the storage pool starts automatically when the system boots up.
    • Type - The type of the storage pool.
  4. To view a list of storage volumes associated with the storage pool, click Storage Volumes.

    The Storage Volumes pane appears, showing a list of configured storage volumes.

    The information includes:

    • Name - The name of the storage volume.
    • Used by - The VM that is currently using the storage volume.
    • Size - The size of the volume.
  5. To view virtual disks attached to a specific VM:

    1. Click Virtual machines in the left-side menu.
    2. Click the VM whose information you want to see.

      A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  6. Scroll to Disks.

    The Disks section displays information about the disks assigned to the VM, as well as options to Add or Edit disks.

    The information includes the following:

    • Device - The device type of the disk.
    • Used - The amount of disk currently allocated.
    • Capacity - The maximum size of the storage volume.
    • Bus - The type of disk device that is emulated.
    • Access - Whether the disk is Writeable or Read-only. For raw disks, you can also set the access to Writeable and shared.
    • Source - The disk device or file.

12.3. Viewing virtual machine storage information by using the command line

By using the command line, you can view detailed information about storage resources available to your virtual machines (VMs).

Procedure

  1. To view the available storage pools on the host, run the virsh pool-list command with options for the required granularity of the list. For example, the following options display all available information about all storage pools on your host:

    # virsh pool-list --all --details

     Name                State     Autostart   Persistent   Capacity     Allocation   Available
    ---------------------------------------------------------------------------------------------
     default             running   yes         yes          48.97 GiB    23.93 GiB    25.03 GiB
     Downloads           running   yes         yes          175.62 GiB   62.02 GiB    113.60 GiB
     RHEL-Storage-Pool   running   yes         yes          214.62 GiB   93.02 GiB    168.60 GiB
    • For additional options available for viewing storage pool information, use the virsh pool-list --help command.
  2. To list the storage volumes in a specified storage pool, use the virsh vol-list command.

    # virsh vol-list --pool <RHEL-Storage-Pool> --details

     Name                Path                                       Type   Capacity    Allocation
    -----------------------------------------------------------------------------------------------
     RHEL_Volume.qcow2   /home/VirtualMachines/RHEL_Volume.qcow2   file   60.00 GiB   13.93 GiB
  3. To view all block devices attached to a virtual machine, use the virsh domblklist command.

    # virsh domblklist --details <vm-name>

     Type   Device   Target   Source
    -----------------------------------------------------------------------------
     file   disk     hda      /home/VirtualMachines/vm-name.qcow2
     file   cdrom    hdb      -
     file   disk     vdc      /home/VirtualMachines/test-disk2.qcow2
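If you need to process this information in a script, the tabular output of virsh domblklist --details can be parsed into structured records. The following Python sketch is illustrative and is not part of the virsh tooling; it assumes the four-column layout shown above:

```python
def parse_domblklist(output: str) -> list[dict]:
    """Parse the table printed by 'virsh domblklist --details'."""
    devices = []
    for line in output.splitlines():
        line = line.strip()
        # Skip blank lines, the column header, and the separator row.
        if not line or line.startswith(("Type", "-")):
            continue
        dev_type, device, target, source = line.split(None, 3)
        devices.append({"type": dev_type, "device": device,
                        "target": target, "source": source})
    return devices

# Sample output matching the example above.
sample = """\
 Type   Device   Target   Source
-----------------------------------------------------------------------------
 file   disk     hda      /home/VirtualMachines/vm-name.qcow2
 file   cdrom    hdb      -
 file   disk     vdc      /home/VirtualMachines/test-disk2.qcow2
"""
print(parse_domblklist(sample))
```

A more robust approach is to query libvirt directly, for example through the libvirt Python bindings, instead of parsing command output.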

12.4. Attaching storage to virtual machines

To add storage to a virtual machine (VM), you can attach a storage resource to the VM as a virtual disk. Similarly to physical storage devices, virtual disks are independent from the VMs that they are attached to, and can be moved to other VMs.

You can use multiple types of storage resources to add a virtual disk to a VM.

12.4.1. Attaching a file-based virtual disk to your virtual machine by using the command line

To provide local storage for a virtual machine, the easiest option typically is to attach a file-based virtual disk with the .qcow2 or .raw format.

To do so on the command line, you can use one of the following methods:

  • Create a file-based storage volume in a directory-based storage pool managed by libvirt. This requires multiple steps, but provides better integration with the hypervisor.

    Note that a default directory-based storage volume is created automatically when creating the first VM on your RHEL 10 host. The name of this storage pool is based on the name of the directory in which you save the disk image. For example, by default, in the system session of libvirt, the disk image is saved in the /var/lib/libvirt/images/ directory and the storage pool is named images.

  • Use the qemu-img command to create a virtual disk as a file on the host file system. This is a faster method, but does not provide integration with libvirt.

    As a result, virtual disks created by using qemu-img are more difficult to manage after creation.

Note

A file-based virtual disk can also be created and attached when creating a new VM on the command line. To do so, use the --disk option with the virt-install utility. For detailed instructions, see Creating virtual machines.

Procedure

  1. Optional: If you want to create a virtual disk as a storage volume, but you do not want to use the default images storage pool or another existing storage pool on the host, create and set up a new directory-based storage pool.

    1. Configure a directory-type storage pool. For example, to create a storage pool named guest_images_dir that uses the /guest_images directory:

      # virsh pool-define-as guest_images_dir dir --target "/guest_images"
      Pool guest_images_dir defined
    2. Create a target path for the storage pool based on the configuration you previously defined.

      # virsh pool-build guest_images_dir
        Pool guest_images_dir built
    3. Start the storage pool.

      # virsh pool-start guest_images_dir
        Pool guest_images_dir started
    4. Optional: Set the storage pool to start on host boot.

      # virsh pool-autostart guest_images_dir
        Pool guest_images_dir marked as autostarted
    5. Optional: Verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

      # virsh pool-info guest_images_dir
        Name:           guest_images_dir
        UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
        State:          running
        Persistent:     yes
        Autostart:      yes
        Capacity:       458.39 GB
        Allocation:     197.91 MB
        Available:      458.20 GB
  2. Create a file-based virtual disk. To do so, use one of the following methods:

    • To quickly create a file-based VM disk not managed by libvirt, use the qemu-img utility.

      For example, the following command creates a qcow2 disk image named test-image with the size of 30 gigabytes:

      # qemu-img create -f qcow2 test-image 30G
      
      Formatting 'test-image', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=32212254720 lazy_refcounts=off refcount_bits=16
    • To create a file-based VM disk managed by libvirt, define the disk as a storage volume based on an existing directory-based storage pool.

      For example, the following command creates a 20 GB qcow2 volume named vm-disk1 based on the guest_images_dir storage pool:

      # virsh vol-create-as --pool guest_images_dir --name vm-disk1 --capacity 20GB --format qcow2
      
      Vol vm-disk1 created
  3. Locate the virtual disk that you created:

    • For a VM disk created with qemu-img, this is typically your current directory.
    • For a storage volume, examine the storage pool that the volume belongs to:

      # virsh vol-list --pool guest_images_dir --details

       Name        Path                       Type   Capacity    Allocation
      --------------------------------------------------------------------------
       vm-disk1    /guest_images/vm-disk1    file   20.00 GiB   196.00 KiB
  4. Find out which target devices are already used in the VM to which you want to attach the disk:

    # virsh domblklist --details <vm-name>

     Type   Device   Target   Source
    ----------------------------------------------------------------
     file   disk     vda      /home/VirtualMachines/vm-name.qcow2
     file   cdrom    vdb      -
  5. Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
  6. Attach the disk to a VM by using the virsh attach-disk command. Provide a target device that is not in use in the VM.

    For example, the following command attaches the previously created vm-disk1 as the vdc device to the testguest1 VM:

    # virsh attach-disk testguest1 /guest_images/vm-disk1 vdc --persistent
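Steps 4 and 6 above amount to listing the target devices that are already in use and then picking a free one. The following Python helper sketches that logic; the function name is illustrative and is not part of virsh:

```python
import string

def next_free_target(used_targets: list[str], prefix: str = "vd") -> str:
    """Return the first unused disk target name, such as vda, vdb, vdc, ..."""
    for letter in string.ascii_lowercase:
        candidate = prefix + letter
        if candidate not in used_targets:
            return candidate
    raise RuntimeError(f"no free target device with prefix {prefix!r}")

# With vda and vdb already in use, as in the domblklist example above:
print(next_free_target(["vda", "vdb"]))  # vdc
```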

Verification

  1. Inspect the XML configuration of the VM to which you attached the disk to see if the configuration is correct.

    # virsh dumpxml testguest1
    
    ...
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' discard='unmap'/>
          <source file='/guest_images/vm-disk1' index='1'/>
          <backingStore/>
          <target dev='vdc' bus='virtio'/>
          <alias name='virtio-disk2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
    ...
  2. In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.

12.4.2. Attaching a file-based virtual disk to your virtual machine by using the web console

To provide local storage for a virtual machine, the easiest option typically is to attach a file-based virtual disk with the .qcow2 or .raw format.

To do so, create a file-based storage volume in a directory-based storage pool managed by libvirt. A default directory-based storage volume is created automatically when creating the first VM on your RHEL 10 host. The name of this storage pool is based on the name of the directory in which you save the disk image. For example, by default, in the system session of libvirt, the disk image is saved in the /var/lib/libvirt/images/ directory and the storage pool is named images.

Note

A file-based virtual disk can also be created and attached when creating a new VM in the web console. To do so, use the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.

Procedure

  1. Log in to the RHEL 10 web console.

    For details, see Logging in to the web console.

  2. Optional: If you do not want to use the default images storage pool to create a new virtual disk, create a new storage pool.

    1. Click Storage Pools at the top of the Virtual Machines interface, then click Create storage pool.
    2. In the Create Storage Pool dialog, enter a name for the storage pool.
    3. In the Type drop-down menu, select Filesystem directory.
    4. Enter the following information:

      • Target path - The location of the storage pool.
      • Startup - Whether or not the storage pool starts when the host boots.
    5. Click Create.

      The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.

  3. Create a new storage volume based on an existing storage pool.

    1. In the Storage Pools window, click the storage pool from which you want to create a storage volume, then click Storage Volumes and click Create volume.
    2. Enter the following information in the Create Storage Volume dialog:

      • Name - The name of the storage volume.
      • Size - The size of the storage volume in MiB or GiB.
      • Format - The format of the storage volume. The supported types are qcow2 and raw.
    3. Click Create.
  4. Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
  5. Add the created storage volume as a disk to a VM.

    1. In the Virtual Machines interface, click the VM for which you want to create and attach the new disk.

      A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

    2. Scroll to Disks.
    3. In the Disks section, click Add disk.
    4. In the Add disks dialog, select Use existing.
    5. Select the storage pool and storage volume that you want to use for the disk.
    6. Select whether or not the disk will be persistent.

      Note

      Transient disks can only be added to VMs that are running.

    7. Optional: Click Show additional options and adjust the cache type, bus type, and disk identifier of the storage volume.
    8. Click Add.

Verification

  • In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.

12.4.3. Attaching disk-based storage to your virtual machine by using the command line

To provide local storage for a virtual machine (VM), you can use a disk-based disk image. This type of disk image is based on a disk partition on your host and uses the .qcow2 or .raw format.

To attach disk-based storage to a VM by using the command line, use one of the following methods:

  • When creating a new VM, create and attach a new disk as a part of the virt-install command, by using the --disk option. For detailed instructions, see Creating virtual machines.
  • For an existing VM, create a disk-based storage volume and attach it to the VM. For instructions, see the following procedure.

Prerequisites

  • Ensure your hypervisor supports disk-based storage pools:

    # virsh pool-capabilities | grep "'disk' supported='yes'"

    If the command displays any output, disk-based pools are supported.

  • Prepare a device on which you will base the storage pool. For this purpose, prefer partitions (for example, /dev/sdb1) or LVM volumes. If you provide a VM with write access to an entire disk or block device (for example, /dev/sdb), the VM will likely partition it or create its own LVM groups on it. This can result in system errors on the host.

    However, if you require using an entire block device for the storage pool, Red Hat recommends protecting any important partitions on the device from GRUB’s os-prober function. To do so, edit the /etc/default/grub file and apply one of the following configurations:

    • Disable os-prober.

      GRUB_DISABLE_OS_PROBER=true
    • Prevent os-prober from discovering the partition that you want to use. For example:

      GRUB_OS_PROBER_SKIP_LIST="5ef6313a-257c-4d43@/dev/sdb1"
  • Back up any data on the selected storage device before creating a storage pool. Depending on the version of libvirt being used, dedicating a disk to a storage pool may reformat and erase all data currently stored on the disk device.
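Rather than grepping the output, as in the first prerequisite above, you can parse the XML that virsh pool-capabilities prints. This Python sketch assumes `<pool type='...' supported='...'>` elements, which is what the grep pattern in the prerequisite matches; the sample XML here is illustrative, so run the actual command to get the real data:

```python
import xml.etree.ElementTree as ET

def supported_pool_types(capabilities_xml: str) -> set[str]:
    """Return the pool types marked supported='yes' in pool-capabilities XML."""
    root = ET.fromstring(capabilities_xml)
    return {pool.get("type") for pool in root.iter("pool")
            if pool.get("supported") == "yes"}

# Illustrative sample; the real XML comes from 'virsh pool-capabilities'.
sample = """<storagepoolCapabilities>
  <pool type='dir' supported='yes'/>
  <pool type='disk' supported='yes'/>
  <pool type='logical' supported='no'/>
</storagepoolCapabilities>"""
print(sorted(supported_pool_types(sample)))  # ['dir', 'disk']
```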

Procedure

  1. Create and set up a new disk-based storage pool, if you do not already have one.

    1. Define and create a disk-type storage pool. The following example creates a storage pool named guest_images_disk that uses the /dev/sdb device and is mounted on the /dev directory.

      # virsh pool-define-as guest_images_disk disk --source-format=gpt --source-dev=/dev/sdb --target /dev
      Pool guest_images_disk defined
    2. Create a storage pool target path for a pre-formatted file-system storage pool, initialize the storage source device, and define the format of the data.

      # virsh pool-build guest_images_disk
        Pool guest_images_disk built
    3. Optional: Verify that the pool was created.

      # virsh pool-list --all
      
        Name                 State      Autostart
        -----------------------------------------
        default              active     yes
        guest_images_disk    inactive   no
    4. Start the storage pool.

      # virsh pool-start guest_images_disk
        Pool guest_images_disk started
      Note

      The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

    5. Optional: Turn on autostart.

      By default, a storage pool defined with virsh is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

      # virsh pool-autostart guest_images_disk
        Pool guest_images_disk marked as autostarted
  2. Create a disk-based storage volume. For example, the following command creates a 20 GB volume named sdb1 in the guest_images_disk storage pool:

    # virsh vol-create-as --pool guest_images_disk --name sdb1 --capacity 20GB --format extended
    
    Vol sdb1 created
  3. Attach the storage volume as a virtual disk to a VM.

    1. Locate the storage volume that you created. To do so, examine the storage pool that the volume belongs to:

      # virsh vol-list --pool guest_images_disk --details
      
       Name        Path                      Type   Capacity    Allocation
      ---------------------------------------------------------------------
       sdb1      /dev/sdb1                  block   20.00 GiB   20.00 GiB
    2. Find out which target devices are already used in the VM to which you want to attach the disk:

      # virsh domblklist --details <vm-name>

       Type   Device   Target   Source
      ----------------------------------------------------------------
       file   disk     vda      /home/VirtualMachines/vm-name.qcow2
       file   cdrom    vdb      -
    3. Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
    4. Attach the disk to a VM by using the virsh attach-disk command. Provide a target device that is not in use in the VM.

      For example, the following command attaches the previously created sdb1 volume as the vdc device to the testguest1 VM:

      # virsh attach-disk testguest1 /dev/sdb1 vdc --persistent

Verification

  1. Inspect the XML configuration of the VM to which you attached the disk to see if the configuration is correct.

    # virsh dumpxml testguest1
    
    ...
      <disk type="block" device="disk">
        <driver name="qemu" type="raw"/>
        <source dev="/dev/sdb1" index="2"/>
        <backingStore/>
        <target dev="vdc" bus="virtio"/>
        <alias name="virtio-disk2"/>
        <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
      </disk>
    ...
  2. In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
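The `<disk>` element shown in the verification output can also be generated programmatically, for example if you prefer to attach the device with virsh attach-device and an XML file instead of virsh attach-disk. The following Python sketch is illustrative; the driver and bus values simply mirror the example above:

```python
import xml.etree.ElementTree as ET

def block_disk_xml(source_dev: str, target_dev: str) -> str:
    """Build a <disk> element for a host block device, as in the example above."""
    disk = ET.Element("disk", type="block", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="raw")
    ET.SubElement(disk, "source", dev=source_dev)
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    return ET.tostring(disk, encoding="unicode")

print(block_disk_xml("/dev/sdb1", "vdc"))
```

You could save the output to a file and pass it to virsh attach-device with the --persistent option; libvirt fills in the address and alias elements automatically.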

12.4.4. Attaching disk-based storage to your virtual machine by using the web console

To provide local storage for a virtual machine (VM), you can use disk-based storage, which is based on an entire physical disk or a disk partition on your host.

To attach disk-based storage to a VM by using the web console, use one of the following methods:

  • When creating a new VM, create and attach a new disk by using the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.
  • For an existing VM, create a disk-based storage volume and attach it to the VM. For instructions, see the following procedure.

Procedure

  1. Log in to the RHEL 10 web console.

    For details, see Logging in to the web console.

  2. Create and set up a new disk-based storage pool, if you do not already have one.

    1. Click Storage Pools at the top of the Virtual Machines interface, then click Create storage pool.
    2. In the Create Storage Pool dialog, enter a name for the storage pool.
    3. In the Type drop-down menu, select Physical disk device.

      Note

      If you do not see the Physical disk device option in the drop-down menu, then your hypervisor does not support disk-based storage pools.

    4. Enter the following information:

      • Target path - The path specifying the target device. This will be the path used for the storage pool.
      • Source path - The path specifying the storage device. For example, /dev/sdb.
      • Format - The type of the partition table.
      • Startup - Whether or not the storage pool starts when the host boots.
    5. Click Create.

      The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.

  3. Create a new storage volume based on an existing storage pool.

    1. In the Storage Pools window, click the storage pool from which you want to create a storage volume, then click Storage Volumes and click Create volume.
    2. Enter the following information in the Create Storage Volume dialog:

      • Name - The name of the storage volume.
      • Size - The size of the storage volume in MiB or GiB.
      • Format - The format of the storage volume.
    3. Click Create.
  4. Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
  5. Add the created storage volume as a disk to a VM.

    1. In the Virtual Machines interface, click the VM for which you want to create and attach the new disk.

      A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

    2. Scroll to Disks.
    3. In the Disks section, click Add disk.
    4. In the Add disks dialog, select Use existing.
    5. Select the storage pool and storage volume that you want to use for the disk.
    6. Select whether or not the disk will be persistent.

      Note

      Transient disks can only be added to VMs that are running.

    7. Optional: Click Show additional options and adjust the cache type, bus type, and disk identifier of the storage volume.
    8. Click Add.

Verification

  • In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.

12.4.5. Attaching LVM-based storage to your virtual machine by using the command line

To provide local storage for a virtual machine (VM), you can use an LVM-based storage volume. This type of disk image is based on an LVM volume group, and uses the .qcow2 or .raw format.

To attach LVM-based storage to a VM by using the command line, use one of the following methods:

  • When creating a new VM, create and attach a new disk as a part of the virt-install command, by using the --disk option. For detailed instructions, see Creating virtual machines.
  • For an existing VM, create an LVM-based storage volume and attach it to the VM. For instructions, see the following procedure.

Considerations

Note that LVM-based storage volumes have certain limitations:

  • LVM-based storage pools do not provide the full flexibility of LVM.
  • LVM-based storage pools are volume groups. You can create volume groups by using the virsh utility, but this way you can only have one device in the created volume group. To create a volume group with multiple devices, use the LVM utility instead. For details, see How to create a volume group in Linux with LVM.
  • LVM-based storage pools require a full disk partition. If you activate a new partition or device by using virsh commands, the partition will be formatted and all data will be erased. If you are using a host’s existing volume group, as in these procedures, nothing will be erased.

Prerequisites

  • Ensure your hypervisor supports LVM-based storage pools:

    # virsh pool-capabilities | grep "'logical' supported='yes'"

    If the command displays any output, LVM-based pools are supported.

  • Make sure an LVM volume group exists on your host. For instructions on creating one, see Creating an LVM volume group.
  • Back up any data on the selected storage device before creating a storage pool. Dedicating a disk partition to a storage pool will reformat and erase all data currently stored on the disk device.

Procedure

  1. Create and set up a new LVM-based storage pool, if you do not already have one.

    1. Define an LVM-type storage pool. For example, the following command defines a storage pool named guest_images_lvm that uses the lvm_vg volume group and is mounted on the /dev/lvm_vg directory:

      # virsh pool-define-as guest_images_lvm logical --source-dev /dev/sdb --source-name lvm_vg --target /dev/lvm_vg
      Pool guest_images_lvm defined
    2. Create a storage pool based on the configuration you previously defined.

      # virsh pool-build guest_images_lvm
        Pool guest_images_lvm built
    3. Optional: Verify that the pool was created.

      # virsh pool-list --all
      
        Name                   State      Autostart
        -------------------------------------------
        default                active     yes
        guest_images_lvm       inactive   no
    4. Start the storage pool.

      # virsh pool-start guest_images_lvm
        Pool guest_images_lvm started
      Note

      The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

    5. Optional: Turn on autostart.

      By default, a storage pool defined with virsh is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

      # virsh pool-autostart guest_images_lvm
        Pool guest_images_lvm marked as autostarted
  2. Create an LVM-based storage volume. For example, the following command creates a 20 GB qcow2 volume named vm-disk1 based on the guest_images_lvm storage pool:

    # virsh vol-create-as --pool guest_images_lvm --name vm-disk1 --capacity 20GB --format qcow2
    
    Vol vm-disk1 created
  3. Attach the storage volume as a virtual disk to a VM.

    1. Locate the storage volume that you created. To do so, examine the storage pool that the volume belongs to:

      # virsh vol-list --pool guest_images_lvm --details
      
       Name        Path                            Type   Capacity    Allocation
      -----------------------------------------------------------------------------
       vm-disk1   /dev/guest_images_lvm/vm-disk1   block   20.00 GiB   196.00 KiB
    2. Find out which target devices are already used in the VM to which you want to attach the disk:

      # virsh domblklist --details <vm-name>

       Type   Device   Target   Source
      ----------------------------------------------------------------
       file   disk     vda      /home/VirtualMachines/vm-name.qcow2
       file   cdrom    vdb      -
    3. Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
    4. Attach the disk to a VM by using the virsh attach-disk command. Provide a target device that is not in use in the VM.

      For example, the following command attaches the previously created vm-disk1 as the vdc device to the testguest1 VM:

      # virsh attach-disk testguest1 /dev/guest_images_lvm/vm-disk1 vdc --persistent
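Step 3.2 above leaves you to pick an unused target device by eye; the selection can also be scripted. The following is a minimal sketch that parses `virsh domblklist`-style output with awk and prints the first free vdX name. The sample output is inlined in a here-document so the sketch is self-contained; on a real host, replace it with the actual `virsh domblklist --details <vm-name>` output.

```shell
# Pick the first unused virtio target (vda..vdz) from domblklist-style output.
# Hypothetical helper; the inlined sample stands in for real virsh output.
next_target=$(awk 'NR > 2 && $3 ~ /^vd/ { used[$3] } END {
  for (i = 0; i < 26; i++) {
    t = "vd" sprintf("%c", 97 + i)   # 97 is ASCII lowercase a
    if (!(t in used)) { print t; exit }
  }
}' <<'EOF'
 Type   Device   Target   Source
----------------------------------------------------------------
 file   disk     vda      /home/VirtualMachines/vm-name.qcow2
 file   cdrom    vdb      -
EOF
)
echo "$next_target"   # → vdc
```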

Verification

  1. Inspect the XML configuration of the VM to which you attached the disk to see if the configuration is correct.

    # virsh dumpxml testguest1
    
    ...
        <disk type="block" device="disk">
          <driver name="qemu" type="raw"/>
          <source dev="/dev/guest_images_lvm/vm-disk1" index="3"/>
          <backingStore/>
          <target dev="vdc" bus="virtio"/>
          <alias name="virtio-disk2"/>
          <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
        </disk>
    
    ...
  2. In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
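The XML check in verification step 1 can also be scripted rather than read by eye. A minimal sketch, with the relevant `virsh dumpxml` fragment inlined so it runs anywhere; on a real host, you would pipe `virsh dumpxml testguest1` into the same grep:

```shell
# Confirm the attached disk shows up with the expected target device.
# The here-document stands in for real `virsh dumpxml testguest1` output.
found=$(grep -o 'target dev="vdc"' <<'EOF'
<disk type="block" device="disk">
  <source dev="/dev/guest_images_lvm/vm-disk1"/>
  <target dev="vdc" bus="virtio"/>
</disk>
EOF
)
echo "$found"   # → target dev="vdc"
```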

12.4.6. Attaching LVM-based storage to your virtual machine by using the web console

To provide local storage for a virtual machine (VM), you can use an LVM-based storage volume. This type of disk image is based on an LVM volume group, and uses the .qcow2 or .raw format.

To attach LVM-based storage to a VM by using the web console, use one of the following methods:

  • When creating a new VM, create and attach a new disk by using the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.
  • For an existing VM, create an LVM-based storage volume and attach it to the VM. For instructions, see the following procedure.

Considerations

Note that LVM-based storage volumes have certain limitations:

  • LVM-based storage pools do not provide the full flexibility of LVM.
  • LVM-based storage pools are volume groups. You can create volume groups by using the virsh utility, but this way you can only have one device in the created volume group. To create a volume group with multiple devices, use the LVM utility instead. For details, see How to create a volume group in Linux with LVM.
  • LVM-based storage pools require a full disk partition. If you activate a new partition or device by using virsh commands, the partition will be formatted and all data will be erased. If you are using a host’s existing volume group, as in these procedures, nothing will be erased.

Procedure

  1. Log in to the RHEL 10 web console.

    For details, see Logging in to the web console.

  2. Create and set up a new LVM-based storage pool, if you do not already have one.

    1. Click Storage Pools at the top of the Virtual Machines interface, then click Create storage pool.
    2. In the Create Storage Pool dialog, enter a name for the storage pool.
    3. In the Type drop-down menu, select LVM volume group.

      Note

      If you do not see the LVM volume group option in the drop-down menu, then your hypervisor does not support LVM-based storage pools.

    4. Enter the following information:

      • Source volume group - The name of the LVM volume group that you wish to use.
      • Startup - Whether or not the storage pool starts when the host boots.
    5. Click Create.

      The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.

  3. Create a new storage volume based on an existing storage pool.

    1. In the Storage Pools window, click the storage pool from which you want to create a storage volume, then click Storage Volumes and click Create volume.
    2. Enter the following information in the Create Storage Volume dialog:

      • Name - The name of the storage volume.
      • Size - The size of the storage volume in MiB or GiB.
      • Format - The format of the storage volume. The supported types are qcow2 and raw.
    3. Click Create.
  4. Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
  5. Add the created storage volume as a disk to a VM.

    1. In the Virtual Machines interface, click the VM for which you want to create and attach the new disk.

      A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

    2. Scroll to Disks.
    3. In the Disks section, click Add disk.
    4. In the Add disks dialog, select Use existing.
    5. Select the storage pool and storage volume that you want to use for the disk.
    6. Select whether or not the disk will be persistent.

      Note

      Transient disks can only be added to VMs that are running.

    7. Optional: Click Show additional options and adjust the cache type, bus type, and disk identifier of the storage volume.
    8. Click Add.

Verification

  • In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.

12.4.7. Attaching NFS-based storage to your virtual machine by using the command line

To provide networked storage for a virtual machine (VM), you can use a storage volume based on a Network File System (NFS) server.

To attach NFS-based storage to a VM by using the command line, use one of the following methods:

  • When creating a new VM, create and attach a new disk by using the --disk option of the virt-install command. For detailed instructions, see Creating virtual machines by using the command line.
  • For an existing VM, create an NFS-based storage volume and attach it to the VM. For instructions, see the following procedure.

Prerequisites

  • Ensure your hypervisor supports NFS-based storage pools:

    # virsh pool-capabilities | grep "<value>nfs</value>"

    If the command displays any output, NFS-based pools are supported.

  • You must have an available NFS share that you can use. For details, see Mounting NFS shares.

Procedure

  1. Create and set up a new NFS-based storage pool, if you do not already have one.

    1. Define and create an NFS-type storage pool. For example, to create a storage pool named guest_images_netfs that uses an NFS server with IP 111.222.111.222 mounted on the server directory /home/net_mount by using the target directory /var/lib/libvirt/images/nfspool:

      # virsh pool-define-as --name guest_images_netfs \
         --type netfs --source-host='111.222.111.222' \
         --source-path='/home/net_mount' --source-format='nfs' \
         --target='/var/lib/libvirt/images/nfspool'
      
      Pool guest_images_netfs defined
    2. Create a storage pool based on the configuration you previously defined.

      # virsh pool-build guest_images_netfs
        Pool guest_images_netfs built
    3. Optional: Verify that the pool was created.

      # virsh pool-list --all
      
        Name                   State      Autostart
        -------------------------------------------
        default                active     yes
        guest_images_netfs     inactive   no
    4. Start the storage pool.

      # virsh pool-start guest_images_netfs
        Pool guest_images_netfs started
    5. Optional: Turn on autostart.

      By default, a storage pool defined with virsh is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

      # virsh pool-autostart guest_images_netfs
        Pool guest_images_netfs marked as autostarted
  2. Create an NFS-based storage volume. For example, the following command creates a 20 GB qcow2 volume named vm-disk1 in the guest_images_netfs storage pool:

    # virsh vol-create-as --pool guest_images_netfs --name vm-disk1 --capacity 20GB --format qcow2
    
    Vol vm-disk1 created
  3. Attach the storage volume as a virtual disk to a VM.

    1. Locate the storage volume that you created. To do so, examine the storage pool that the volume belongs to:

      # virsh vol-list --pool guest_images_netfs --details
      
       Name        Path                                       Type   Capacity    Allocation
      -------------------------------------------------------------------------------------
       vm-disk1   /var/lib/libvirt/images/nfspool/vm-disk1    file  20.00 GiB   196.00 KiB
    2. Find out which target devices are already used in the VM to which you want to attach the disk:

      # virsh domblklist --details <vm-name>
      
       Type   Device   Target   Source
      ----------------------------------------------------------------
       file   disk     vda      /home/VirtualMachines/vm-name.qcow2
       file   cdrom    vdb      -
    3. Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
    4. Attach the disk to a VM by using the virsh attach-disk command. Provide a target device that is not in use in the VM.

      For example, the following command attaches the previously created vm-disk1 as the vdc device to the testguest1 VM:

      # virsh attach-disk testguest1 /var/lib/libvirt/images/nfspool/vm-disk1 vdc --persistent
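For reference, the `virsh pool-define-as` flags used in step 1 correspond to libvirt pool XML along these lines (you can view the actual definition with `virsh pool-dumpxml guest_images_netfs`); this is a sketch of the expected shape, not verbatim output:

```xml
<pool type='netfs'>
  <name>guest_images_netfs</name>
  <source>
    <host name='111.222.111.222'/>
    <dir path='/home/net_mount'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/nfspool</path>
  </target>
</pool>
```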

Verification

  1. Inspect the XML configuration of the VM to which you attached the disk to see if the configuration is correct.

    # virsh dumpxml testguest1
    
    ...
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' discard='unmap'/>
          <source file='/var/lib/libvirt/images/nfspool/vm-disk1' index='1'/>
          <backingStore/>
          <target dev='vdc' bus='virtio'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
        </disk>
    ...
  2. In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.

12.4.8. Attaching NFS-based storage to your virtual machine by using the web console

To provide networked storage for a virtual machine (VM), you can use a storage volume based on a Network File System (NFS) server.

To attach NFS-based storage to a VM by using the web console, use one of the following methods:

  • When creating a new VM, create and attach a new disk by using the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.
  • For an existing VM, create an NFS-based storage volume and attach it to the VM. For instructions, see the following procedure.

Procedure

  1. Log in to the RHEL 10 web console.

    For details, see Logging in to the web console.

  2. Create and set up a new NFS-based storage pool, if you do not already have one.

    1. Click Storage Pools at the top of the Virtual Machines interface, then click Create storage pool.
    2. In the Create Storage Pool dialog, enter a name for the storage pool.
    3. In the Type drop-down menu, select Network file system.

      Note

      If you do not see the Network file system option in the drop-down menu, then your hypervisor does not support NFS-based storage pools.

    4. Enter the following information:

      • Target path - The local directory on the host where the NFS share will be mounted. This will be the path used for the storage pool.
      • Host - The hostname of the network server where the mount point is located. This can be a hostname or an IP address.
      • Source path - The directory used on the network server.
      • Startup - Whether or not the storage pool starts when the host boots.
    5. Click Create.

      The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.

  3. Create a new storage volume based on an existing storage pool.

    1. In the Storage Pools window, click the storage pool from which you want to create a storage volume, then click Storage Volumes and click Create volume.
    2. Enter the following information in the Create Storage Volume dialog:

      • Name - The name of the storage volume.
      • Size - The size of the storage volume in MiB or GiB.
      • Format - The format of the storage volume. The supported types are qcow2 and raw.
    3. Click Create.
  4. Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
  5. Add the created storage volume as a disk to a VM.

    1. In the Virtual Machines interface, click the VM for which you want to create and attach the new disk.

      A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

    2. Scroll to Disks.
    3. In the Disks section, click Add disk.
    4. In the Add disks dialog, select Use existing.
    5. Select the storage pool and storage volume that you want to use for the disk.
    6. Select whether or not the disk will be persistent.

      Note

      Transient disks can only be added to VMs that are running.

    7. Optional: Click Show additional options and adjust the cache type, bus type, and disk identifier of the storage volume.
    8. Click Add.

Verification

  • In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.

12.5. Checking the consistency of a virtual disk

Before attaching a disk image to a virtual machine (VM), ensure that the disk image does not have problems, such as corruption or high fragmentation. To do so, you can use the qemu-img check command.

If needed, you can also use this command to attempt repairing the disk image.

Prerequisites

  • Any virtual machines (VMs) that use the disk image must be shut down.

Procedure

  1. Use the qemu-img check command on the image you want to test. For example:

    # qemu-img check <test-name.qcow2>
    
    No errors were found on the image.
    327434/327680 = 99.92% allocated, 0.00% fragmented, 0.00% compressed clusters
    Image end offset: 21478375424

    If the check finds problems on the disk image, the output of the command looks similar to the following:

    167 errors were found on the image.
    Data may be corrupted, or further writes to the image may corrupt it.
    
    453368 leaked clusters were found on the image.
    This means waste of disk space, but no harm to data.
    
    259 internal errors have occurred during the check.
    Image end offset: 21478375424
  2. To attempt repairing the detected issues, use the qemu-img check command with the -r all option. Note, however, that this might fix only some of the problems.

    Warning

    Repairing the disk image can cause data corruption or other issues. Back up the disk image before attempting the repair.

    # qemu-img check -r all <test-name.qcow2>
    
    [...]
    122 errors were found on the image.
    Data may be corrupted, or further writes to the image may corrupt it.
    
    250 internal errors have occurred during the check.
    Image end offset: 27071414272

    This output indicates the number of problems found on the disk image after the repair.

  3. If further disk image repairs are required, you can use various libguestfs tools in the guestfish shell.
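In scripts, you do not have to parse the text output of qemu-img check: the command also reports its result through its exit status (0 = clean, 1 = the check could not be completed, 2 = errors found, 3 = leaked clusters only, as documented in qemu-img(1)). The following is a minimal sketch of a hypothetical wrapper that turns that status into an action hint; the function itself runs anywhere:

```shell
# Hypothetical helper: map a qemu-img check exit status to an action hint.
# Usage on a real host:  qemu-img check img.qcow2; interpret_check_status $?
interpret_check_status() {
  case "$1" in
    0) echo "clean" ;;
    2) echo "errors found: back up the image, then try qemu-img check -r all" ;;
    3) echo "leaked clusters only: repair with qemu-img check -r leaks" ;;
    *) echo "check did not complete" ;;
  esac
}
interpret_check_status 0   # → clean
```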

12.6. Resizing a virtual disk

If an existing disk image requires additional space, you can use the qemu-img resize utility to change the size of the image to fit your use case.

Prerequisites

  • You have created a backup of the disk image.
  • Any virtual machines (VMs) that use the disk image must be shut down.

    Warning

    Resizing the disk image of a running VM can cause data corruption or other issues.

  • The hard disk of the host has sufficient free space for the intended disk image size.
  • Optional: You have ensured that the disk image does not have data corruption or similar problems. For instructions, see Checking the consistency of a virtual disk.

Procedure

  1. Determine the location of the disk image file for the VM you want to resize. For example:

    # virsh domblklist <vm-name>
    
     Target   Source
    ----------------------------------------------------------
     vda      /home/username/disk-images/example-image.qcow2
  2. Optional: Back up the current disk image.

    # cp <example-image.qcow2> <example-image-backup.qcow2>
  3. Use the qemu-img resize utility to resize the image.

    For example, to increase the <example-image.qcow2> size by 10 gigabytes:

    # qemu-img resize <example-image.qcow2> +10G
  4. Resize the file system, partitions, or physical volumes inside the disk image to use the additional space. To do so in a RHEL guest operating system, use the instructions in Managing storage devices and Managing file systems.
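To illustrate what step 3 does in the simplest case: a raw disk image is a plain file, so growing it is essentially extending the file (qemu-img resize additionally validates the format and handles qcow2 metadata). A self-contained sketch using a temporary file in place of a real image:

```shell
# Grow a stand-in "raw image" by 10 GiB, as `qemu-img resize img +10G` would.
img=$(mktemp)
truncate -s 1G "$img"      # pretend this is an existing 1 GiB raw image
truncate -s +10G "$img"    # extend by 10 GiB (sparse: no disk space consumed yet)
size=$(stat -c %s "$img")
echo "$size"               # → 11811160064 (11 GiB)
rm -f "$img"
```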

Verification

  1. Display information about the resized image and see if it has the intended size:

    # qemu-img info <example-image.qcow2>
    
    image: example-image.qcow2
    file format: qcow2
    virtual size: 30 GiB (32212254720 bytes)
    disk size: 196 KiB
    cluster_size: 65536
    Format specific information:
        compat: 1.1
        compression type: zlib
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
        extended l2: false
  2. Check the resized disk image for potential errors. For instructions, see Checking the consistency of a virtual disk.

12.7. Converting between virtual disk formats

You can convert the virtual disk image to a different format by using the qemu-img convert command. For example, converting between virtual disk image formats might be necessary if you want to attach the disk image to a virtual machine (VM) running on a different hypervisor.

Prerequisites

  • Any virtual machines (VMs) that use the disk image must be shut down.
  • The source disk image format must be supported for conversion by QEMU. For a detailed list, see Supported disk image formats.

Procedure

  • Use the qemu-img convert command to convert an existing virtual disk image to a different format. For example, to convert a raw disk image to a QCOW2 disk image:

    # qemu-img convert -f raw <original-image.img> -O qcow2 <converted-image.qcow2>
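When wrapping the conversion in scripts, it can help to validate the format pair before invoking qemu-img. A minimal sketch of a hypothetical convert_cmd helper that builds the command line for the raw/qcow2 conversions covered here; it echoes the command instead of running it, so the logic can be exercised even without qemu-img installed:

```shell
# Hypothetical helper: build a qemu-img convert command for raw <-> qcow2.
convert_cmd() {
  src_fmt=$1; src=$2; dst_fmt=$3; dst=$4
  case "$src_fmt:$dst_fmt" in
    raw:qcow2|qcow2:raw)
      echo "qemu-img convert -f $src_fmt $src -O $dst_fmt $dst" ;;
    *)
      echo "unsupported conversion: $src_fmt -> $dst_fmt" >&2
      return 1 ;;
  esac
}
cmd=$(convert_cmd raw original-image.img qcow2 converted-image.qcow2)
echo "$cmd"   # → qemu-img convert -f raw original-image.img -O qcow2 converted-image.qcow2
```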

Verification

  1. Display information about the converted image and see if it has the intended format and size.

    # qemu-img info <converted-image.qcow2>
    
    image: converted-image.qcow2
    file format: qcow2
    virtual size: 30 GiB (32212254720 bytes)
    disk size: 196 KiB
    cluster_size: 65536
    Format specific information:
        compat: 1.1
        compression type: zlib
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
        extended l2: false
  2. Check the disk image for potential errors. For instructions, see Checking the consistency of a virtual disk.

12.8. Removing virtual machine storage by using the command line

If you no longer require a virtual disk attached to a virtual machine (VM), or if you want to free up host storage resources, you can use the command line to do any of the following:

  • Detach the virtual disk from the VM.
  • Delete the virtual disk and its content.
  • Deactivate the storage pool related to the virtual disk.
  • Delete the storage pool related to the virtual disk.

Procedure

  1. To detach a virtual disk from a VM, use the virsh detach-disk command.

    1. Optional: List all storage devices attached to the VM:

      # virsh domblklist --details <vm-name>
      
       Type   Device   Target   Source
      -----------------------------------------------------------------------------
       file   disk     hda      /home/VirtualMachines/vm-name.qcow2
       file   cdrom    hdb      -
       file   disk     vdc      /home/VirtualMachines/test-disk2.qcow2
    2. Use the target parameter to detach the disk. For example, to detach the disk attached as vdc from the testguest VM, use the following command:

      # virsh detach-disk testguest vdc --persistent
  2. To delete the disk, do one of the following:

    1. If the disk is managed as a storage volume, use the virsh vol-delete command. For example, to delete volume test-disk2 associated with storage pool RHEL-storage-pool:

      # virsh vol-delete --pool RHEL-storage-pool test-disk2
    2. If the disk is purely file-based, remove the file.

      # rm /home/VirtualMachines/test-disk2.qcow2
  3. To deactivate a storage pool, use the virsh pool-destroy command.

    When you deactivate a storage pool, no new volumes can be created in that pool. However, any VMs that have volumes in that pool will continue to run. This is useful, for example, if you want to limit the number of volumes that can be created in a pool to increase system performance.

    # virsh pool-destroy RHEL-storage-pool
    
    Pool RHEL-storage-pool destroyed
  4. To completely remove a storage pool, delete its definition by using the virsh pool-undefine command.

    # virsh pool-undefine RHEL-storage-pool
    
    Pool RHEL-storage-pool has been undefined

12.9. Removing virtual machine storage by using the web console

If you no longer require a virtual disk attached to a virtual machine (VM), or if you want to free up host storage resources, you can use the web console to do any of the following:

  • Detach the virtual disk from the VM.
  • Delete the virtual disk and its content.
  • Deactivate the storage pool related to the virtual disk.
  • Delete the storage pool related to the virtual disk.

Procedure

  1. To detach a virtual disk from a VM, use the following steps:

    1. In the Virtual Machines interface, click the VM from which you want to detach a disk.

      A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

    2. Scroll to Disks.

      The Disks section displays information about the disks assigned to the VM, as well as options to Add or Edit disks.

    3. On the right side of the row for the disk that you want to detach, click the Menu button .
    4. In the drop-down menu that appears, click the Remove button.

      A Remove disk from VM? confirmation dialog box appears.

    5. In the confirmation dialog box, click Remove. Optionally, if you also want to remove the disk image, click Remove and delete file.

      The virtual disk is detached from the VM.

  2. To delete the disk, do one of the following:

    1. If the disk is managed as a storage volume, click Storage Pools at the top of the Virtual Machines tab. Click the name of the storage pool that contains the disk. Click Storage Volumes. Select the storage volume you want to remove. Click Delete 1 Volume.

    2. If the disk is a file not managed as a storage volume (for example, if it was created by qemu-img), you must use a graphical file manager or the command line to delete it. The RHEL web console currently does not support deleting individual files.
  3. To deactivate a storage pool, use the following steps.

    When you deactivate a storage pool, no new volumes can be created in that pool. However, any VMs that have volumes in that pool will continue to run. This is useful, for example, if you want to limit the number of volumes that can be created in a pool to increase system performance.

    1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.
    2. Click Deactivate on the storage pool row.

      The storage pool is deactivated.

  4. To completely remove a storage pool, use the following steps:

    1. Click Storage Pools on the Virtual Machines tab.

      The Storage Pools window appears, showing a list of configured storage pools.

    2. Click the Menu button of the storage pool you want to delete and click Delete.

      A confirmation dialog appears.

    3. Optional: To delete the storage volumes inside the pool, select the corresponding check boxes in the dialog.
    4. Click Delete.

      The storage pool is deleted. If you selected the checkbox in the previous step, the associated storage volumes are deleted as well.

12.10. Supported disk image formats

To run a virtual machine (VM) on RHEL, you must use a disk image with a supported format. You can also convert certain unsupported disk images to a supported format.

Supported disk image formats for VMs

You can use disk images that use the following formats to run VMs in RHEL:

  • qcow2 - Provides certain additional features, such as compression.
  • raw - Might provide better performance.
  • luks - Disk images encrypted by using the Linux Unified Key Setup (LUKS) specification.

Supported disk image formats for conversion

  • If required, you can convert your disk images between the raw and qcow2 formats by using the qemu-img convert command.
  • If you need to convert a vmdk disk image to the raw or qcow2 format, convert the VM that uses the disk to KVM by using the virt-v2v utility.
  • To convert other disk image formats to raw or qcow2, you can use the qemu-img convert command. For a list of formats that work with this command, see the QEMU documentation.

    Note that in most cases, converting the disk image format of a non-KVM virtual machine to qcow2 or raw is not sufficient for the VM to correctly run on RHEL KVM. In addition to converting the disk image, corresponding drivers must be installed and configured in the guest operating system of the VM. For supported hypervisor conversion, use the virt-v2v utility.
