Chapter 6. Configuring bare-metal nodes to enable the creation of bare-metal instances from a bootable volume
This feature is deprecated in Red Hat OpenStack Platform 17.0. Bug fixes and support are provided in RHOSP 17.0, but no new feature enhancements will be made.
You can create volumes in the Block Storage service (cinder) and connect these volumes to bare-metal instances that you create with the Bare Metal Provisioning service (ironic).
To enable your cloud users to create bare-metal instances from bootable volumes, complete the following tasks:
- Configure each bare-metal node to enable the launching of bare-metal instances from a bootable volume.
- Configure iSCSI kernel parameters on the boot disk.
6.1. Prerequisites
The Bare Metal Provisioning service (ironic) connects bare-metal nodes to block storage volumes through an iSCSI interface. Therefore, the overcloud must be deployed with an iSCSI backend for the Block Storage service (cinder). To enable an iSCSI backend for the Block Storage service, set the CinderEnableIscsiBackend parameter to true and deploy the overcloud.
Note: You cannot use the Block Storage volume boot feature with a Red Hat Ceph Storage backend.
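For example, the following is a minimal sketch of an environment file that enables the iSCSI backend. The file name custom-cinder-iscsi.yaml is an assumption; use any environment file that you already include in your deployment command:

parameter_defaults:
  CinderEnableIscsiBackend: true

Include the environment file when you deploy or redeploy the overcloud, keeping all other environment files and options from your existing deployment command, for example:

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/custom-cinder-iscsi.yaml \
  <other_environment_files_and_options>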
6.2. Configuring nodes to create bare-metal instances from a bootable volume
You must configure each bare-metal node to enable it to launch bare-metal instances from a bootable volume.
Procedure
Source your overcloud credentials file:
$ source ~/<credentials_file>
Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
Set the iscsi_boot capability to true for each bare-metal node:
$ openstack baremetal node set --property capabilities=iscsi_boot:true <node_uuid>
Replace <node_uuid> with the ID of the bare-metal node.
Set the storage-interface to cinder for each bare-metal node:
$ openstack baremetal node set --storage-interface cinder <node_uuid>
Create an iSCSI connector for the node:
$ openstack baremetal volume connector create --node <node_uuid> \
  --type iqn --connector-id <connector_id>
Replace <connector_id> with a unique ID for each node, for example, iqn.2010-10.org.openstack.node<NUM>, where <NUM> is an incremented number for each node.
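Optionally, confirm the node configuration before you continue. The following commands are a verification sketch, not part of the required procedure, and display the node capabilities, storage interface, and volume connector:

$ openstack baremetal node show <node_uuid> --fields properties storage_interface
$ openstack baremetal volume connector list --node <node_uuid>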
6.3. Configuring iSCSI kernel parameters on the boot disk
You must configure the instance image to enable iSCSI booting in the kernel.
Procedure
Log in to the undercloud host as the stack user.
Source the stackrc undercloud credentials file:
$ source ~/stackrc
- Download a Red Hat Enterprise Linux KVM image in QCOW2 format from the Red Hat Enterprise Linux Product Software download page.
Copy the image to the /home/stack/ directory on the undercloud.
Mount the QCOW2 image and access it as the root user:
Load the nbd kernel module:
$ sudo modprobe nbd
Connect the QCOW2 image as /dev/nbd0:
$ sudo qemu-nbd --connect=/dev/nbd0 <image>
Check the partitions on the NBD:
$ sudo fdisk /dev/nbd0 -l
New Red Hat Enterprise Linux QCOW2 images contain only one partition, which is usually named /dev/nbd0p1 on the NBD.
Create a mount point for the image:
$ mkdir /tmp/mountpoint
Mount the image:
$ sudo mount /dev/nbd0p1 /tmp/mountpoint/
Mount your dev directory so that the image has access to device information on the host:
$ sudo mount -o bind /dev /tmp/mountpoint/dev
Change the root directory to the mount point:
$ sudo chroot /tmp/mountpoint /bin/bash
Configure iSCSI on the image:
Note: Some commands in this step might report the following error:
lscpu: cannot open /proc/cpuinfo: No such file or directory
This error is not critical and you can ignore it.
Move the resolv.conf file to a temporary location:
# mv /etc/resolv.conf /etc/resolv.conf.bak
Create a temporary resolv.conf file to resolve DNS requests for the Red Hat Content Delivery Network. This example uses 8.8.8.8 for the nameserver:
# echo "nameserver 8.8.8.8" > /etc/resolv.conf
Register the mounted image to the Red Hat Content Delivery Network:
# subscription-manager register
Enter your user name and password when the command prompts you.
Attach a subscription that contains Red Hat Enterprise Linux:
# subscription-manager list --all --available
# subscription-manager attach --pool <POOLID>
Replace <POOLID> with the pool ID of the subscription.
Disable the default repositories:
# subscription-manager repos --disable "*"
Enable the Red Hat Enterprise Linux repository:
Red Hat Enterprise Linux 7:
# subscription-manager repos --enable "rhel-7-server-rpms"
Red Hat Enterprise Linux 8:
# subscription-manager repos --enable "rhel-8-for-x86_64-baseos-eus-rpms"
Install the iscsi-initiator-utils package:
# yum install -y iscsi-initiator-utils
Unregister the mounted image:
# subscription-manager unregister
Restore the original resolv.conf file:
# mv /etc/resolv.conf.bak /etc/resolv.conf
Check the kernel version on the mounted image:
# rpm -qa kernel
For example, if the output is kernel-3.10.0-1062.el7.x86_64, the kernel version is 3.10.0-1062.el7.x86_64. Note this kernel version for the next step.
Note: New Red Hat Enterprise Linux QCOW2 images have only one kernel version installed. If more than one kernel version is installed, use the latest one.
Add the network and iscsi dracut modules to the initramfs image:
# dracut --force --add "network iscsi" /boot/initramfs-<KERNELVERSION>.img <KERNELVERSION>
Replace <KERNELVERSION> with the version number that you obtained from rpm -qa kernel. The following example uses 3.10.0-1062.el7.x86_64 as the kernel version:
# dracut --force --add "network iscsi" /boot/initramfs-3.10.0-1062.el7.x86_64.img 3.10.0-1062.el7.x86_64
Exit from the mounted image back to your host operating system:
# exit
Unmount the image:
Unmount the dev directory from the temporary mount point:
$ sudo umount /tmp/mountpoint/dev
Unmount the image from the mount point:
$ sudo umount /tmp/mountpoint
Disconnect the QCOW2 image from /dev/nbd0:
$ sudo qemu-nbd --disconnect /dev/nbd0
Rebuild the grub menu configuration on the image:
Install the libguestfs-tools package:
$ sudo yum -y install libguestfs-tools
Important: If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud:
$ sudo systemctl disable --now iscsid.socket
Set the libguestfs backend to use QEMU directly:
$ export LIBGUESTFS_BACKEND=direct
Update the grub configuration on the image and set the rd.iscsi.firmware=1 kernel parameter on the boot disk:
$ guestfish -a <image> -m /dev/sda3 sh "mount /dev/sda2 /boot/efi && rm /boot/grub2/grubenv && /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg && cp /boot/grub2/grub.cfg /boot/efi/EFI/redhat/grub.cfg && grubby --update-kernel=ALL --args=\"rd.iscsi.firmware=1\" && cp /boot/grub2/grubenv /boot/efi/EFI/redhat/grubenv && echo Success"
Replace <image> with the path to the QCOW2 image, for example, /home/stack/rhel-server-7.7-x86_64-kvm.qcow2.
Upload the iSCSI-enabled image to the Image service (glance):
$ openstack image create --disk-format qcow2 --container-format bare \
  --file <image> <image_ref>
Replace <image> with the name of the iSCSI-enabled image, for example, rhel-server-7.7-x86_64-kvm.qcow2.
Replace <image_ref> with a name to use to reference the image, for example, rhel-server-7.7-iscsi.
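Optionally, confirm that the image is available in the Image service, for example:

$ openstack image show <image_ref>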
6.4. Creating a bare-metal instance from a bootable volume
To verify that the bare-metal node can host bare-metal instances created from a bootable volume, create the bootable volume and launch an instance.
Procedure
Source your overcloud credentials file:
$ source ~/<credentials_file>
Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
Create a volume from the iSCSI-enabled instance image:
$ openstack volume create --size 10 --image <image_ref> --bootable myBootableVolume
Replace <image_ref> with the name or ID of the image to write to the volume, for example, rhel-server-7.7-iscsi.
Create a bare-metal instance that uses the boot volume:
$ openstack server create --flavor baremetal --volume myBootableVolume --key-name default myBareMetalInstance
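Provisioning a bare-metal instance can take several minutes. To confirm that the instance launched from the boot volume, check the server and volume status, for example:

$ openstack server list
$ openstack volume list

When provisioning is complete, the server status is ACTIVE and the myBootableVolume status changes to in-use.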