
Chapter 6. Booting from cinder volumes


You can create volumes in the Block Storage service (cinder) and connect these volumes to bare metal instances that you create with the Bare Metal Provisioning service (ironic).

6.1. Cinder volume boot for bare metal nodes

You can boot bare metal nodes from a block storage device that is stored in OpenStack Block Storage (cinder). OpenStack Bare Metal (ironic) connects bare metal nodes to volumes through an iSCSI interface.

Ironic enables this feature during the overcloud deployment. However, consider the following conditions before you deploy the overcloud:

  • The overcloud requires the cinder iSCSI backend to be enabled. Set the CinderEnableIscsiBackend heat parameter to true during overcloud deployment, as shown in the example after this list.
  • You cannot use the cinder volume boot feature with a Red Hat Ceph Storage backend.
  • You must set the rd.iscsi.firmware=1 kernel parameter on the boot disk.
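
For example, to enable the cinder iSCSI backend, you can set the CinderEnableIscsiBackend parameter in a custom environment file and include that file in your overcloud deployment command. The file name and path in this sketch are examples only; include the file alongside the other environment files that you already use:

  $ cat > /home/stack/templates/enable-cinder-iscsi.yaml <<'EOF'
  parameter_defaults:
    CinderEnableIscsiBackend: true
  EOF
  $ openstack overcloud deploy --templates \
    -e <existing_environment_files> \
    -e /home/stack/templates/enable-cinder-iscsi.yaml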

6.2. Configuring nodes for cinder volume boot

You must configure certain options for each bare metal node to successfully boot from a cinder volume.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Set the iscsi_boot capability to true and the storage-interface to cinder for the selected node:

    $ openstack baremetal node set --property capabilities=iscsi_boot:true --storage-interface cinder <NODEID>

    Replace <NODEID> with the ID of the chosen node.

  3. Create an iSCSI connector for the node:

    $ openstack baremetal volume connector create --node <NODEID> --type iqn --connector-id iqn.2010-10.org.openstack.node<NUM>

    The connector ID for each node must be unique. In this example, the connector is iqn.2010-10.org.openstack.node<NUM> where <NUM> is an incremented number for each node.
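
  4. Optional: verify the configuration by listing the registered volume connectors and reviewing the storage interface and capabilities of the node, for example:

    $ openstack baremetal volume connector list
    $ openstack baremetal node show <NODEID> -c properties -c storage_interface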

6.3. Configuring iSCSI kernel parameters on the boot disk

You must enable iSCSI booting in the kernel of the image. To do this, mount the QCOW2 image and enable the iSCSI components on the image.

Prerequisites

  • Download a Red Hat Enterprise Linux QCOW2 image and copy it to the /home/stack/ directory on the undercloud. You can download Red Hat Enterprise Linux KVM images in QCOW2 format from the Red Hat Customer Portal.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Mount the QCOW2 image and access it as the root user:

    1. Load the nbd kernel module:

      $ sudo modprobe nbd
    2. Connect the QCOW2 image as /dev/nbd0:

      $ sudo qemu-nbd --connect=/dev/nbd0 <IMAGE>

      Replace <IMAGE> with the path to the QCOW2 image that you copied to the /home/stack/ directory.

    3. Check the partitions on the NBD:

      $ sudo fdisk /dev/nbd0 -l

      New Red Hat Enterprise Linux QCOW2 images contain only one partition, which is usually named /dev/nbd0p1 on the NBD.

    4. Create a mount point for the image:

      $ mkdir /tmp/mountpoint
    5. Mount the image:

      $ sudo mount /dev/nbd0p1 /tmp/mountpoint/
    6. Bind-mount the host /dev directory so that the image has access to device information on the host:

      $ sudo mount -o bind /dev /tmp/mountpoint/dev
    7. Change the root directory to the mount point:

      $ sudo chroot /tmp/mountpoint /bin/bash
  3. Configure iSCSI on the image:

    Note

    Some commands in this step might report the following error:

    lscpu: cannot open /proc/cpuinfo: No such file or directory

    This error is not critical and you can ignore it.

    1. Move the resolv.conf file to a temporary location:

      # mv /etc/resolv.conf /etc/resolv.conf.bak
    2. Create a temporary resolv.conf file to resolve DNS requests for the Red Hat Content Delivery Network. This example uses 8.8.8.8 for the nameserver:

      # echo "nameserver 8.8.8.8" > /etc/resolv.conf
    3. Register the mounted image to the Red Hat Content Delivery Network:

      # subscription-manager register

      Enter your user name and password when the command prompts you.

    4. Attach a subscription that contains Red Hat Enterprise Linux:

      # subscription-manager list --all --available
      # subscription-manager attach --pool <POOLID>

      Replace <POOLID> with the pool ID of the subscription.

    5. Disable the default repositories:

      # subscription-manager repos --disable "*"
    6. Enable the Red Hat Enterprise Linux repository:

      • Red Hat Enterprise Linux 7:

        # subscription-manager repos --enable "rhel-7-server-rpms"
      • Red Hat Enterprise Linux 8:

        # subscription-manager repos --enable "rhel-8-for-x86_64-baseos-eus-rpms"
    7. Install the iscsi-initiator-utils package:

      # yum install -y iscsi-initiator-utils
    8. Unregister the mounted image:

      # subscription-manager unregister
    9. Restore the original resolv.conf file:

      # mv /etc/resolv.conf.bak /etc/resolv.conf
    10. Check the kernel version on the mounted image:

      # rpm -qa kernel

      For example, if the output is kernel-3.10.0-1062.el7.x86_64, the kernel version is 3.10.0-1062.el7.x86_64. Note this kernel version for the next step.

      Note

      New Red Hat Enterprise Linux QCOW2 images have only one kernel version installed. If more than one kernel version is installed, use the latest one.
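
      For example, to select the latest installed kernel version non-interactively, you can store it in a shell variable inside the chroot. The variable name KERNELVERSION in this sketch is arbitrary, and you can use its value in place of <KERNELVERSION> in the next step:

      # KERNELVERSION=$(rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort -V | tail -n 1)
      # echo $KERNELVERSION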

    11. Add the network and iscsi dracut modules to the initramfs image:

      # dracut --force --add "network iscsi" /boot/initramfs-<KERNELVERSION>.img <KERNELVERSION>

      Replace <KERNELVERSION> with the version number that you obtained from rpm -qa kernel. The following example uses 3.10.0-1062.el7.x86_64 as the kernel version:

      # dracut --force --add "network iscsi" /boot/initramfs-3.10.0-1062.el7.x86_64.img 3.10.0-1062.el7.x86_64
    12. Exit from the mounted image back to your host operating system:

      # exit
  4. Unmount the image:

    1. Unmount the dev directory from the temporary mount point:

      $ sudo umount /tmp/mountpoint/dev
    2. Unmount the image from the mount point:

      $ sudo umount /tmp/mountpoint
    3. Disconnect the QCOW2 image from /dev/nbd0:

      $ sudo qemu-nbd --disconnect /dev/nbd0
  5. Rebuild the grub menu configuration on the image:

    1. Install the libguestfs-tools package:

      $ sudo yum -y install libguestfs-tools
      Important

      If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud:

      $ sudo systemctl disable --now iscsid.socket
    2. Set the libguestfs backend to use QEMU directly:

      $ export LIBGUESTFS_BACKEND=direct
    3. Update the grub configuration on the image:

      $ guestfish -a <IMAGE> -m /dev/sda3 sh "mount /dev/sda2 /boot/efi && rm /boot/grub2/grubenv && /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg && cp /boot/grub2/grub.cfg /boot/efi/EFI/redhat/grub.cfg && grubby --update-kernel=ALL --args=\"rd.iscsi.firmware=1\" && cp /boot/grub2/grubenv /boot/efi/EFI/redhat/grubenv && echo Success"

      Replace <IMAGE> with the path to the QCOW2 image. In this example, /dev/sda3 contains the root file system and /dev/sda2 contains the EFI system partition; adjust these device names to match the partition layout of your image.
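
    4. Optional: confirm that the rd.iscsi.firmware=1 parameter is present on the kernel command line in the image. This sketch uses the virt-cat utility from the libguestfs-tools package; replace <IMAGE> with the path to the QCOW2 image. Depending on the boot configuration of the image, grubby writes the parameter to /boot/grub2/grub.cfg or to /boot/efi/EFI/redhat/grub.cfg, so check both files:

      $ virt-cat -a <IMAGE> /boot/grub2/grub.cfg | grep rd.iscsi.firmware
      $ virt-cat -a <IMAGE> /boot/efi/EFI/redhat/grub.cfg | grep rd.iscsi.firmware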

6.4. Creating and using a boot volume in cinder

You must upload the iSCSI-enabled image to OpenStack Image Storage (glance) and create the boot volume in OpenStack Block Storage (cinder).

Procedure

  1. Log in to the undercloud as the stack user.
  2. Upload the iSCSI-enabled image to glance:

    $ openstack image create --disk-format qcow2 --container-format bare --file rhel-server-7.7-x86_64-kvm.qcow2 rhel-server-7.7-iscsi
  3. Create a volume from the image:

    $ openstack volume create --size 10 --image rhel-server-7.7-iscsi --bootable rhel-test-volume
  4. Create a bare metal instance that uses the boot volume in cinder:

    $ openstack server create --flavor baremetal --volume rhel-test-volume --key-name default rhel-test
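
  5. Optional: monitor the deployment until the instance becomes active. When the instance boots successfully from the volume, the server status changes to ACTIVE and the volume status changes to in-use. For example:

    $ openstack server show rhel-test -c status
    $ openstack volume show rhel-test-volume -c status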