Chapter 7. Booting from cinder volumes
This chapter contains information about creating volumes in OpenStack Block Storage (cinder) and connecting them to bare metal instances that you create with OpenStack Bare Metal (ironic).
7.1. Cinder volume boot for bare metal nodes
You can boot bare metal nodes from a block storage device that is stored in OpenStack Block Storage (cinder). OpenStack Bare Metal (ironic) connects bare metal nodes to volumes through an iSCSI interface.
This feature is enabled during overcloud deployment. However, consider the following conditions before you deploy:
- The overcloud requires the cinder iSCSI backend to be enabled. Set the CinderEnableIscsiBackend heat parameter to true during overcloud deployment, as shown in the example after this list.
- You cannot use the cinder volume boot feature with a Red Hat Ceph Storage backend.
- You must set the rd.iscsi.firmware=1 kernel parameter on the boot disk.
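The following is a minimal sketch of how you might set the CinderEnableIscsiBackend parameter in a custom environment file and include that file in your deployment. The file name cinder-iscsi.yaml, its location, and the bare deploy command are examples only; add the environment file to the options that you already use:

# cinder-iscsi.yaml (example custom environment file)
parameter_defaults:
  CinderEnableIscsiBackend: true

$ openstack overcloud deploy --templates -e /home/stack/templates/cinder-iscsi.yaml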
7.2. Configuring nodes for cinder volume boot
You must configure certain options for each bare metal node to successfully boot from a cinder volume.
Procedure
- Log in to the undercloud as the stack user.
- Source the overcloud credentials:
  $ source ~/overcloudrc
- Set the iscsi_boot capability to true and the storage-interface to cinder for the selected node:
  $ openstack baremetal node set --property capabilities=iscsi_boot:true --storage-interface cinder <NODEID>
  Replace <NODEID> with the ID of the chosen node.
- Create an iSCSI connector for the node:
  $ openstack baremetal volume connector create --node <NODEID> --type iqn --connector-id iqn.2010-10.org.openstack.node<NUM>
  The connector ID for each node must be unique. In this example, the connector is iqn.2010-10.org.openstack.node<NUM>, where <NUM> is an incremented number for each node.
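If you configure many nodes, you can script the capability and connector commands. The following is a minimal sketch that assumes three nodes named node-1 through node-3; the node names and the count are examples only:

$ for NUM in 1 2 3; do
    # node-${NUM} is an example node name; substitute your own node names or IDs
    openstack baremetal node set --property capabilities=iscsi_boot:true --storage-interface cinder node-${NUM}
    openstack baremetal volume connector create --node node-${NUM} --type iqn --connector-id iqn.2010-10.org.openstack.node${NUM}
  done
$ openstack baremetal volume connector list

The final command lists all connectors so that you can confirm that each connector ID is unique.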
7.3. Configuring iSCSI kernel parameters on the boot disk
You must enable iSCSI booting in the kernel of the image. To do this, mount the QCOW2 image and enable the iSCSI components on the image.
Prerequisites
Download a Red Hat Enterprise Linux KVM guest image in QCOW2 format from the Red Hat Customer Portal and copy it to the /home/stack/ directory on the undercloud.
Procedure
- Log in to the undercloud as the stack user.
- Mount the QCOW2 image and access it as the root user:
  - Load the nbd kernel module:
    $ sudo modprobe nbd
  - Connect the QCOW2 image as /dev/nbd0:
    $ sudo qemu-nbd --connect=/dev/nbd0 <IMAGE>
  - Check the partitions on the NBD:
    $ sudo fdisk /dev/nbd0 -l
    New Red Hat Enterprise Linux QCOW2 images contain only one partition, which is usually named /dev/nbd0p1 on the NBD.
  - Create a mount point for the image:
    $ mkdir /tmp/mountpoint
  - Mount the image:
    $ sudo mount /dev/nbd0p1 /tmp/mountpoint/
  - Mount your dev directory so that the image has access to device information on the host:
    $ sudo mount -o bind /dev /tmp/mountpoint/dev
  - Change the root directory to the mount point:
    $ sudo chroot /tmp/mountpoint /bin/bash
- Configure iSCSI on the image:
  Note: Some commands in this step might report the following error:
  lscpu: cannot open /proc/cpuinfo: No such file or directory
  This error is not critical and you can ignore it.
  - Move the resolv.conf file to a temporary location:
    # mv /etc/resolv.conf /etc/resolv.conf.bak
  - Create a temporary resolv.conf file to resolve DNS requests for the Red Hat Content Delivery Network. This example uses 8.8.8.8 for the nameserver:
    # echo "nameserver 8.8.8.8" > /etc/resolv.conf
  - Register the mounted image to the Red Hat Content Delivery Network:
    # subscription-manager register
    Enter your username and password when the command prompts you.
  - Attach a subscription that contains Red Hat Enterprise Linux:
    # subscription-manager list --all --available
    # subscription-manager attach --pool <POOLID>
    Replace <POOLID> with the pool ID of the subscription.
  - Disable the default repositories:
    # subscription-manager repos --disable "*"
  - Enable the Red Hat Enterprise Linux repository:
    Red Hat Enterprise Linux 7:
    # subscription-manager repos --enable "rhel-7-server-rpms"
    Red Hat Enterprise Linux 8:
    # subscription-manager repos --enable "rhel-8-for-x86_64-baseos-rpms"
  - Install the iscsi-initiator-utils package:
    # yum install -y iscsi-initiator-utils
  - Unregister the mounted image:
    # subscription-manager unregister
  - Restore the original resolv.conf file:
    # mv /etc/resolv.conf.bak /etc/resolv.conf
  - Check the kernel version on the mounted image:
    # rpm -qa kernel
    For example, if the output is kernel-3.10.0-1062.el7.x86_64, the kernel version is 3.10.0-1062.el7.x86_64. Note this kernel version for the next step.
    Note: New Red Hat Enterprise Linux QCOW2 images have only one kernel version installed. If more than one kernel version is installed, use the latest one.
  - Add the network and iscsi dracut modules to the initramfs image:
    # dracut --force --add "network iscsi" /boot/initramfs-<KERNELVERSION>.img <KERNELVERSION>
    Replace <KERNELVERSION> with the version that you obtained from rpm -qa kernel. The following example uses 3.10.0-1062.el7.x86_64 as the kernel version:
    # dracut --force --add "network iscsi" /boot/initramfs-3.10.0-1062.el7.x86_64.img 3.10.0-1062.el7.x86_64
  - Edit the /etc/default/grub configuration file and add rd.iscsi.firmware=1 to the GRUB_CMDLINE_LINUX parameter:
    # vi /etc/default/grub
    The following example shows the GRUB_CMDLINE_LINUX parameter with the added rd.iscsi.firmware=1 kernel argument:
    GRUB_CMDLINE_LINUX="console=tty0 crashkernel=auto console=ttyS0,115200n8 no_timer_check net.ifnames=0 rd.iscsi.firmware=1"
    Save these changes.
    Note: Do not rebuild the grub menu configuration at this step. A later step in this procedure rebuilds the grub menu in a temporary virtual machine.
  - Exit from the mounted image back to your host operating system:
    # exit
- Unmount the image:
  - Unmount the dev directory from the temporary mount point:
    $ sudo umount /tmp/mountpoint/dev
  - Unmount the image from the mount point:
    $ sudo umount /tmp/mountpoint
  - Disconnect the QCOW2 image from /dev/nbd0:
    $ sudo qemu-nbd --disconnect /dev/nbd0
- Rebuild the grub menu configuration on the image:
  - Install the libguestfs-tools package:
    $ sudo yum -y install libguestfs-tools
    Important: If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud:
    $ sudo systemctl disable --now iscsid.socket
  - Set the libguestfs backend to use QEMU directly:
    $ export LIBGUESTFS_BACKEND=direct
  - Update the grub configuration on the image:
    $ guestfish -a <IMAGE> -m /dev/sda1 sh "/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg"
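Optionally, verify that the rebuilt grub configuration contains the iSCSI kernel argument. This check reuses the guestfish pattern from the previous step; replace <IMAGE> with your image file:

$ guestfish -a <IMAGE> -m /dev/sda1 sh "grep rd.iscsi.firmware /boot/grub2/grub.cfg"

Each kernel entry in the output should include rd.iscsi.firmware=1.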
7.4. Creating and using a boot volume in cinder
You must upload the iSCSI-enabled image to OpenStack Image Storage (glance) and create the boot volume in OpenStack Block Storage (cinder).
Procedure
- Log in to the undercloud as the stack user.
- Upload the iSCSI-enabled image to glance:
  $ openstack image create --disk-format qcow2 --container-format bare --file rhel-server-7.7-x86_64-kvm.qcow2 rhel-server-7.7-iscsi
- Create a volume from the image:
  $ openstack volume create --size 10 --image rhel-server-7.7-iscsi --bootable rhel-test-volume
- Create a bare metal instance that uses the boot volume in cinder:
  $ openstack server create --flavor baremetal --volume rhel-test-volume --key-name default rhel-test
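To confirm that the instance boots from the cinder volume, you can check the status of the server and the volume. The following brief check assumes the rhel-test and rhel-test-volume names from this example:

$ openstack server list --name rhel-test
$ openstack volume show rhel-test-volume -f value -c status

When the instance is active and attached to the volume, the volume status is in-use.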