Chapter 2. Preparing Ceph Storage nodes for deployment
Red Hat Ceph Storage nodes are bare metal systems with IPMI power management. Director installs Red Hat Enterprise Linux on each node.
Director communicates with each node through the Provisioning network during the introspection and provisioning processes. All nodes connect to the Provisioning network through the native VLAN.
For more information about bare metal provisioning before overcloud deployment, see Provisioning and deploying your overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
For a complete guide to bare metal provisioning, see Configuring the Bare Metal Provisioning service.
2.1. Cleaning Ceph Storage node disks
Ceph Storage OSDs and journal partitions require factory-clean disks. The Bare Metal Provisioning service (ironic) must erase all data and metadata from these disks before the Ceph OSD services are installed.
You can configure director to delete all disk data and metadata by default by using the Bare Metal Provisioning service. When director is configured to perform this task, the Bare Metal Provisioning service performs an additional step to boot the nodes each time a node is set to available.
The Bare Metal Provisioning service uses the wipefs --force --all command. This command deletes all data and metadata on the disk, but it does not perform a secure erase. A secure erase takes much longer.
Procedure
- Open /home/stack/undercloud.conf and add the following parameter:
  clean_nodes=true
- Save /home/stack/undercloud.conf.
- Update the undercloud configuration:
  openstack undercloud install
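For reference, a minimal sketch of the relevant part of /home/stack/undercloud.conf after this change. The clean_nodes option belongs in the [DEFAULT] section; all other settings in your file are unaffected:
  [DEFAULT]
  # Existing undercloud settings remain as they are.
  # Enable automated cleaning so ironic wipes disk data and metadata
  # each time a node is set to available.
  clean_nodes = true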
2.2. Registering nodes
Register the nodes to enable communication with director.
Procedure
- Create a node inventory JSON file in /home/stack. Enter hardware and power management details for each node. For example:
{ "nodes":[ { "mac":[ "b1:b1:b1:b1:b1:b1" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.205" }, { "mac":[ "b2:b2:b2:b2:b2:b2" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.206" }, { "mac":[ "b3:b3:b3:b3:b3:b3" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.207" }, { "mac":[ "c1:c1:c1:c1:c1:c1" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.208" }, { "mac":[ "c2:c2:c2:c2:c2:c2" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.209" }, { "mac":[ "c3:c3:c3:c3:c3:c3" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.210" }, { "mac":[ "d1:d1:d1:d1:d1:d1" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.211" }, { "mac":[ "d2:d2:d2:d2:d2:d2" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.212" }, { "mac":[ "d3:d3:d3:d3:d3:d3" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.213" } ] }
- Save the new file.
- Initialize the stack user:
$ source ~/stackrc
- Import the JSON inventory file into director and register the nodes:
$ openstack overcloud node import <inventory_file>
Replace <inventory_file> with the name of the file created in the first step.
- Assign the kernel and ramdisk images to each node:
$ openstack overcloud node configure <node>
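After the nodes are registered and configured, you can optionally confirm that they are listed and in the expected state. This is a quick check rather than part of the documented procedure:
  $ openstack baremetal node list
The command prints each registered node with its UUID, power state, and provisioning state.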
2.3. Verifying available Red Hat Ceph Storage packages
Verify that all required packages are available to avoid overcloud deployment failures.
2.3.1. Verifying cephadm package installation
Verify that the cephadm package is installed on at least one overcloud node. The cephadm package is used to bootstrap the first node of the Ceph Storage cluster.
The cephadm package is included in the overcloud-hardened-uefi-full.qcow2 image. The tripleo_cephadm role uses the Ansible package module to ensure it is present in the image.
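If you want to confirm the package manually after the nodes are deployed, a minimal check is to query the RPM database on a Ceph node. This sketch assumes the default tripleo-admin user and a reachable node hostname, both of which vary by environment:
  $ ssh tripleo-admin@<ceph_node> 'sudo rpm -q cephadm'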
2.4. Defining the root disk for multi-disk Ceph clusters
Ceph Storage nodes typically use multiple disks. Director must identify the root disk in multi-disk configurations. The overcloud image is written to the root disk during the provisioning process.
Hardware properties are used to identify the root disk. For more information about the properties that you can use to identify the root disk, see Properties that identify the root disk.
Procedure
- Verify the disk information from the hardware introspection of each node:
(undercloud)$ openstack baremetal introspection data save <node_uuid> --file <output_file_name>
Replace <node_uuid> with the UUID of the node. Replace <output_file_name> with the name of the file that contains the output of the node introspection.
For example, the data for one node might show three disks:
[ { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sda", "wwn_vendor_extension": "0x1ea4dcc412a9632b", "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b", "model": "PERC H330 Mini", "wwn": "0x61866da04f380700", "serial": "61866da04f3807001ea4dcc412a9632b" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdb", "wwn_vendor_extension": "0x1ea4e13c12e36ad6", "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6", "model": "PERC H330 Mini", "wwn": "0x61866da04f380d00", "serial": "61866da04f380d001ea4e13c12e36ad6" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdc", "wwn_vendor_extension": "0x1ea4e31e121cfb45", "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45", "model": "PERC H330 Mini", "wwn": "0x61866da04f37fc00", "serial": "61866da04f37fc001ea4e31e121cfb45" } ]
- Set the root disk for the node by using a unique hardware property:
(undercloud)$ openstack baremetal node set --property root_device='{<property_value>}' <node-uuid>
Replace <property_value> with the unique hardware property value from the introspection data that you want to use to set the root disk. Replace <node_uuid> with the UUID of the node.
Note: A unique hardware property is any property from the hardware introspection step that uniquely identifies the disk. For example, the following command uses the disk serial number to set the root disk:
(undercloud)$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0
- Configure the BIOS of each node to boot first from the network and then from the root disk.
Director identifies the specific disk to use as the root disk. When you run the openstack overcloud node provision command, director provisions and writes the overcloud image to the root disk.
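To confirm that the root device hint was applied to a node, you can display the node properties. This is an optional check rather than part of the documented procedure:
  (undercloud)$ openstack baremetal node show <node_uuid> -f json -c properties
The root_device hint appears inside the properties field.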
2.4.1. Properties that identify the root disk
There are several properties that you can define to help director identify the root disk:
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
- wwn (String): Unique storage identifier.
- wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
- wwn_vendor_extension (String): Unique vendor storage identifier.
- rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
- name (String): The name of the device, for example: /dev/sdb1.
Use the name property for devices with persistent names. Do not use the name property to set the root disk for devices that do not have persistent names, because the value can change when the node boots.
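For example, to match on the WWN with the vendor extension instead of the serial number, you can use the wwn_with_extension property. The value shown here is taken from the example introspection output earlier in this section; substitute the value and node UUID from your own environment:
  (undercloud)$ openstack baremetal node set --property root_device='{"wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6"}' <node_uuid>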
2.5. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement
The default image for a Red Hat OpenStack Platform (RHOSP) deployment is overcloud-hardened-uefi-full.qcow2. The overcloud-hardened-uefi-full.qcow2 image uses a valid RHOSP subscription. You can use the overcloud-minimal image when you do not want to consume your subscription entitlements and want to avoid reaching the limit of your paid Red Hat subscriptions. This is useful, for example, when you want to provision nodes that run only Ceph daemons, or when you want to provision a bare operating system (OS) on which you do not run any other OpenStack services. For information about how to obtain the overcloud-minimal image, see Obtaining images for overcloud nodes.
The overcloud-minimal image supports only standard Linux bridges. The overcloud-minimal image does not support Open vSwitch (OVS) because OVS is an OpenStack service that requires a Red Hat OpenStack Platform subscription entitlement. OVS is not required to deploy Ceph Storage nodes. Use linux_bond instead of ovs_bond to define bonds.
Procedure
- Open your /home/stack/templates/overcloud-baremetal-deploy.yaml file.
- Add or update the image property for the nodes that you want to use the overcloud-minimal image. You can set the image to overcloud-minimal on specific nodes, or for all nodes for a role.
  Note: The overcloud-minimal image is not a whole disk image. The kernel and ramdisk must be specified in the /home/stack/templates/overcloud-baremetal-deploy.yaml file.
  Specific nodes:
  - name: Ceph
    count: 3
    instances:
      - hostname: overcloud-ceph-0
        name: node00
        image:
          href: file:///var/lib/ironic/images/overcloud-minimal.raw
          kernel: file:///var/lib/ironic/images/overcloud-minimal.vmlinuz
          ramdisk: file:///var/lib/ironic/images/overcloud-minimal.initrd
      - hostname: overcloud-ceph-1
        name: node01
        image:
          href: file:///var/lib/ironic/images/overcloud-minimal.raw
          kernel: file:///var/lib/ironic/images/overcloud-minimal.vmlinuz
          ramdisk: file:///var/lib/ironic/images/overcloud-minimal.initrd
      - hostname: overcloud-ceph-2
        name: node02
        image:
          href: file:///var/lib/ironic/images/overcloud-minimal.raw
          kernel: file:///var/lib/ironic/images/overcloud-minimal.vmlinuz
          ramdisk: file:///var/lib/ironic/images/overcloud-minimal.initrd
  All nodes for a specific role:
  - name: Ceph
    count: 3
    defaults:
      image:
        href: file:///var/lib/ironic/images/overcloud-minimal.raw
        kernel: file:///var/lib/ironic/images/overcloud-minimal.vmlinuz
        ramdisk: file:///var/lib/ironic/images/overcloud-minimal.initrd
    instances:
      - hostname: overcloud-ceph-0
        name: node00
      - hostname: overcloud-ceph-1
        name: node01
      - hostname: overcloud-ceph-2
        name: node02
- In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False:
  rhsm_enforce: False
- Run the provisioning command:
  (undercloud)$ openstack overcloud node provision \
      --stack overcloud \
      --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
      /home/stack/templates/overcloud-baremetal-deploy.yaml
- Pass the overcloud-baremetal-deployed.yaml environment file to the openstack overcloud ceph deploy command.
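A minimal invocation might look like the following; the --output file name is an example and the exact options depend on your Ceph configuration:
  (undercloud)$ openstack overcloud ceph deploy \
      /home/stack/templates/overcloud-baremetal-deployed.yaml \
      --output /home/stack/templates/deployed-ceph.yaml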
2.6. Designating nodes for Red Hat Ceph Storage
To designate nodes for Red Hat Ceph Storage, you must create a new roles file to configure the CephStorage role, and configure the bare metal nodes with a resource class for CephStorage.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:
  [stack@director ~]$ source ~/stackrc
- Generate a new roles data file named roles_data.yaml that includes the Controller, Compute, and CephStorage roles:
  (undercloud)$ openstack overcloud roles generate Controller Compute CephStorage \
      -o /home/stack/templates/roles_data.yaml
- Open roles_data.yaml and ensure it has the following parameters and sections:

  Section/Parameter            Value
  Role comment                 Role: CephStorage
  Role name                    name: CephStorage
  description                  Ceph node role
  HostnameFormatDefault        %stackname%-novaceph-%index%
  deprecated_nic_config_name   ceph.yaml
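  In the generated file, these values appear as part of the CephStorage role entry. A minimal excerpt is sketched below; the real entry also contains ServicesDefault and other generated fields that you should not remove:
    # Role: CephStorage
    - name: CephStorage
      description: |
        Ceph node role
      HostnameFormatDefault: '%stackname%-novaceph-%index%'
      deprecated_nic_config_name: ceph.yaml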
- Register the Ceph nodes for the overcloud by adding them to your node definition template.
- Inspect the node hardware:
(undercloud)$ openstack overcloud node introspect --all-manageable --provide
- Tag each bare metal node that you want to designate for Ceph with a custom Ceph resource class:
  (undercloud)$ openstack baremetal node set \
      --resource-class baremetal.CEPH <node>
Replace <node> with the ID of the bare metal node.
- Add the CephStorage role to your overcloud-baremetal-deploy.yaml file, and define any predictive node placements, resource classes, or other attributes that you want to assign to your nodes:
  - name: Controller
    count: 3
  - name: Compute
    count: 3
  - name: CephStorage
    count: 5
    defaults:
      resource_class: baremetal.CEPH
- Run the provisioning command:
  (undercloud)$ openstack overcloud node provision \
      --stack stack \
      --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
      /home/stack/templates/overcloud-baremetal-deploy.yaml
- Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:
  (undercloud)$ watch openstack baremetal node list
Additional resources
- For more information on node registration, see Section 2.2, “Registering nodes”.
- For more information about inspecting node hardware, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide.