Chapter 6. Booting from cinder volumes


You can create volumes in the Block Storage service (cinder) and connect these volumes to bare metal instances that you create with the Bare Metal Provisioning service (ironic).

6.1. Cinder volume boot for bare metal nodes

You can boot bare metal nodes from a block storage device that is stored in OpenStack Block Storage (cinder). OpenStack Bare Metal (ironic) connects bare metal nodes to volumes through an iSCSI interface.

Ironic enables this feature during overcloud deployment. However, consider the following conditions before you deploy the overcloud:

  • The overcloud requires the cinder iSCSI backend to be enabled. Set the CinderEnableIscsiBackend heat parameter to true during overcloud deployment, as shown in the example environment file after this list.
  • You cannot use the cinder volume boot feature with a Red Hat Ceph Storage backend.
  • You must set the rd.iscsi.firmware=1 kernel parameter on the boot disk.
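
For example, you can set this parameter in a small environment file and include that file when you deploy the overcloud. The file path /home/stack/templates/enable-cinder-iscsi.yaml in this sketch is only an example; use any location that suits your deployment:

  parameter_defaults:
    CinderEnableIscsiBackend: true

Include the environment file in your openstack overcloud deploy command with the -e option, together with the rest of your usual deployment options:

  $ openstack overcloud deploy --templates \
    -e /home/stack/templates/enable-cinder-iscsi.yaml \
    <other environment files and options>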

6.2. Configuring nodes for cinder volume boot

You must configure certain options for each bare metal node to successfully boot from a cinder volume.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the overcloud credentials:

    $ source ~/overcloudrc
  3. Set the iscsi_boot capability to true and the storage-interface to cinder for the selected node:

    $ openstack baremetal node set --property capabilities=iscsi_boot:true --storage-interface cinder <NODEID>

    Replace <NODEID> with the ID of the chosen node.

  4. Create an iSCSI connector for the node:

    $ openstack baremetal volume connector create --node <NODEID> --type iqn --connector-id iqn.2010-10.org.openstack.node<NUM>

    The connector ID for each node must be unique. In this example, the connector is iqn.2010-10.org.openstack.node<NUM> where <NUM> is an incremented number for each node.
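
Verification

Optionally, confirm the node configuration. The following commands are a minimal sketch that uses the same <NODEID> placeholder as the procedure:

  $ openstack baremetal node show <NODEID> -c properties -c storage_interface
  $ openstack baremetal volume connector list --node <NODEID>

The capabilities entry in the properties field must include iscsi_boot:true, the storage_interface field must be set to cinder, and the connector that you created must appear in the list.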

6.3. Configuring iSCSI kernel parameters on the boot disk

You must enable iSCSI booting in the kernel of the image. To accomplish this, mount the QCOW2 image, install the iSCSI initiator utilities, and rebuild the initramfs with the iscsi dracut module.

Prerequisites

  1. Download a Red Hat Enterprise Linux KVM guest image in QCOW2 format and copy it to the /home/stack/ directory on the undercloud. You can download Red Hat Enterprise Linux KVM guest images in QCOW2 format from the Red Hat Customer Portal.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Mount the QCOW2 image and access it as the root user:

    1. Load the nbd kernel module:

      $ sudo modprobe nbd
    2. Connect the QCOW2 image as /dev/nbd0:

      $ sudo qemu-nbd --connect=/dev/nbd0 <IMAGE>

      Replace <IMAGE> with the path to the QCOW2 image, for example /home/stack/rhel-server-7.7-x86_64-kvm.qcow2.

    3. Check the partitions on the NBD:

      $ sudo fdisk /dev/nbd0 -l

      New Red Hat Enterprise Linux QCOW2 images contain only one partition, which is usually named /dev/nbd0p1 on the NBD.

    4. Create a mount point for the image:

      $ mkdir /tmp/mountpoint
    5. Mount the image:

      $ sudo mount /dev/nbd0p1 /tmp/mountpoint/
    6. Mount your dev directory so that the image has access to device information on the host:

      $ sudo mount -o bind /dev /tmp/mountpoint/dev
    7. Change the root directory to the mount point:

      $ sudo chroot /tmp/mountpoint /bin/bash
  3. Configure iSCSI on the image:

    Note

    Some commands in this step might report the following error:

    lscpu: cannot open /proc/cpuinfo: No such file or directory

    This error is not critical and you can ignore the error.

    1. Move the resolv.conf file to a temporary location:

      # mv /etc/resolv.conf /etc/resolv.conf.bak
    2. Create a temporary resolv.conf file to resolve DNS requests for the Red Hat Content Delivery Network. This example uses 8.8.8.8 for the nameserver:

      # echo "nameserver 8.8.8.8" > /etc/resolv.conf
    3. Register the mounted image to the Red Hat Content Delivery Network:

      # subscription-manager register

      Enter your user name and password when the command prompts you.

    4. Attach a subscription that contains Red Hat Enterprise Linux:

      # subscription-manager list --all --available
      # subscription-manager attach --pool <POOLID>

      Substitute <POOLID> with the pool ID of the subscription.

    5. Disable the default repositories:

      # subscription-manager repos --disable "*"
    6. Enable the Red Hat Enterprise Linux repository:

      • Red Hat Enterprise Linux 7:

        # subscription-manager repos --enable "rhel-7-server-rpms"
      • Red Hat Enterprise Linux 8:

        # subscription-manager repos --enable "rhel-8-for-x86_64-baseos-eus-rpms"
    7. Install the iscsi-initiator-utils package:

      # yum install -y iscsi-initiator-utils
    8. Unregister the mounted image:

      # subscription-manager unregister
    9. Restore the original resolv.conf file:

      # mv /etc/resolv.conf.bak /etc/resolv.conf
    10. Check the kernel version on the mounted image:

      # rpm -qa kernel

      For example, if the output is kernel-3.10.0-1062.el7.x86_64, the kernel version is 3.10.0-1062.el7.x86_64. Note this kernel version for the next step.

      Note

      New Red Hat Enterprise Linux QCOW2 images have only one kernel version installed. If more than one kernel version is installed, use the latest one.

    11. Add the network and iscsi dracut modules to the initramfs image:

      # dracut --force --add "network iscsi" /boot/initramfs-<KERNELVERSION>.img <KERNELVERSION>

      Replace <KERNELVERSION> with the version number that you obtained from rpm -qa kernel. The following example uses 3.10.0-1062.el7.x86_64 as the kernel version:

      # dracut --force --add "network iscsi" /boot/initramfs-3.10.0-1062.el7.x86_64.img 3.10.0-1062.el7.x86_64
    12. Exit from the mounted image back to your host operating system:

      # exit
  4. Unmount the image:

    1. Unmount the dev directory from the temporary mount point:

      $ sudo umount /tmp/mountpoint/dev
    2. Unmount the image from the mount point:

      $ sudo umount /tmp/mountpoint
    3. Disconnect the QCOW2 image from /dev/nbd0:

      $ sudo qemu-nbd --disconnect /dev/nbd0
  5. Rebuild the grub menu configuration on the image:

    1. Install the libguestfs-tools package:

      $ sudo yum -y install libguestfs-tools

      Important

      If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud:

      $ sudo systemctl disable --now iscsid.socket
    2. Set the libguestfs backend to use QEMU directly:

      $ export LIBGUESTFS_BACKEND=direct
    3. Update the grub configuration on the image:

      $ guestfish -a /tmp/images/{{ dib_image }} -m /dev/sda3 sh "mount /dev/sda2 /boot/efi && rm /boot/grub2/grubenv && /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg && cp /boot/grub2/grub.cfg /boot/efi/EFI/redhat/grub.cfg && grubby --update-kernel=ALL --args=\"rd.iscsi.firmware=1\" && cp /boot/grub2/grubenv /boot/efi/EFI/redhat/grubenv && echo Success"

      Replace /tmp/images/{{ dib_image }} with the path to the QCOW2 image that you modified, for example /home/stack/rhel-server-7.7-x86_64-kvm.qcow2. The /dev/sda3 and /dev/sda2 device names refer to the root and EFI system partitions of the image; adjust them if your image uses a different partition layout.
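
Verification

Optionally, check that the rd.iscsi.firmware=1 argument is present in the grub configuration on the image. The following sketch uses the virt-cat command from the libguestfs-tools package that you installed earlier, and assumes a Red Hat Enterprise Linux 7 image at the example path; replace the path with the image that you modified:

  $ virt-cat -a /home/stack/rhel-server-7.7-x86_64-kvm.qcow2 /boot/grub2/grub.cfg | grep rd.iscsi.firmware

Depending on the image layout, the argument can also appear in /boot/efi/EFI/redhat/grub.cfg.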

6.4. Creating and using a boot volume in cinder

You must upload the iSCSI-enabled image to the Image service (glance) and create the boot volume in the Block Storage service (cinder).

Procedure

  1. Log in to the undercloud as the stack user.
  2. Upload the iSCSI-enabled image to glance:

    $ openstack image create --disk-format qcow2 --container-format bare --file rhel-server-7.7-x86_64-kvm.qcow2 rhel-server-7.7-iscsi
  3. Create a volume from the image:

    $ openstack volume create --size 10 --image rhel-server-7.7-iscsi --bootable rhel-test-volume
  4. Create a bare metal instance that uses the boot volume in cinder:

    $ openstack server create --flavor baremetal --volume rhel-test-volume --key default rhel-test
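
Verification

Optionally, confirm that the instance boots from the cinder volume. When the deployment completes, the server reaches the ACTIVE status and the volume status changes to in-use:

  $ openstack server show rhel-test -c name -c status
  $ openstack volume show rhel-test-volume -c status -c attachments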