Chapter 2. Deploying HCI hardware
This section contains procedures and information about the preparation and configuration of hyperconverged nodes.
Prerequisites
- You have read Deploying an overcloud and Red Hat Ceph Storage in Deploying Red Hat Ceph and OpenStack together with director.
2.1. Cleaning Ceph Storage node disks
Ceph Storage OSDs and journal partitions require factory-clean disks. The Bare Metal Provisioning service (ironic) must erase all data and metadata from these disks before the Ceph OSD services are installed.
You can configure director to delete all disk data and metadata by default by using the Bare Metal Provisioning service. When director is configured to perform this task, the Bare Metal Provisioning service performs an additional step to boot the nodes each time a node is set to available.
The Bare Metal Provisioning service uses the wipefs --force --all command. This command deletes all data and metadata on the disk, but it does not perform a secure erase. A secure erase takes much longer.
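The following is an illustrative sketch only: the Bare Metal Provisioning service runs this command for you during node cleaning, and /dev/sdX is a placeholder device name.

    # Do not run this manually on provisioned nodes; ironic performs it automatically
    # during node cleaning. Shown only to illustrate what cleaning does to each disk.
    $ sudo wipefs --force --all /dev/sdX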
Procedure
- Open /home/stack/undercloud.conf and add the following parameter:

    clean_nodes=true
- Save /home/stack/undercloud.conf.
- Update the undercloud configuration:

    openstack undercloud install
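As an optional check once nodes are registered (see Section 2.2), you can watch them pass through the cleaning provisioning state whenever they transition back to available. This is a sketch, not part of the documented procedure; the column names are the standard openstack baremetal node list columns.

    # Watch nodes move through "cleaning" and back to "available".
    $ watch 'openstack baremetal node list -c Name -c "Provisioning State"'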
2.2. Registering nodes
Register the nodes to enable communication with director.
Procedure
- Create a node inventory JSON file in /home/stack. Enter hardware and power management details for each node. For example:

    {
        "nodes": [
            {
                "mac": ["b1:b1:b1:b1:b1:b1"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.205"
            },
            {
                "mac": ["b2:b2:b2:b2:b2:b2"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.206"
            },
            {
                "mac": ["b3:b3:b3:b3:b3:b3"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.207"
            },
            {
                "mac": ["c1:c1:c1:c1:c1:c1"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.208"
            },
            {
                "mac": ["c2:c2:c2:c2:c2:c2"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.209"
            },
            {
                "mac": ["c3:c3:c3:c3:c3:c3"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.210"
            },
            {
                "mac": ["d1:d1:d1:d1:d1:d1"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.211"
            },
            {
                "mac": ["d2:d2:d2:d2:d2:d2"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.212"
            },
            {
                "mac": ["d3:d3:d3:d3:d3:d3"], "cpu": "4", "memory": "6144", "disk": "40",
                "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin",
                "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.213"
            }
        ]
    }

- Save the new file.
- Initialize the stack user credentials:

    $ source ~/stackrc

- Import the JSON inventory file into director and register the nodes:
    $ openstack overcloud node import <inventory_file>

  Replace <inventory_file> with the name of the file created in the first step.

- Assign the kernel and ramdisk images to each node:

    $ openstack overcloud node configure <node>
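As an optional check after registration, list the nodes that director now manages. Newly imported nodes typically appear in the manageable or available provisioning state:

    $ openstack baremetal node list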
2.3. Verifying available Red Hat Ceph Storage packages
Verify all required packages are available to avoid overcloud deployment failures.
2.3.1. Verifying cephadm package installation
Verify that the cephadm package is installed on at least one overcloud node. The cephadm package is used to bootstrap the first node of the Ceph Storage cluster.
The cephadm package is included in the overcloud-hardened-uefi-full.qcow2 image. The tripleo_cephadm role uses the Ansible package module to ensure that it is present in the image.
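A minimal spot check you can run after the overcloud is deployed, assuming the example hostname overcloud-ceph-0 from this guide and the tripleo-admin SSH user that director typically creates on overcloud nodes (both are assumptions; substitute your own values):

    # Confirm the cephadm package is present on the first Ceph node.
    $ ssh tripleo-admin@overcloud-ceph-0 'rpm -q cephadm'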
2.4. Deploying the software image for an HCI environment
Nodes configured for an HCI environment must use the overcloud-hardened-uefi-full.qcow2 software image. Using this software image requires a Red Hat OpenStack Platform (RHOSP) subscription.
Procedure
- Open your /home/stack/templates/overcloud-baremetal-deploy.yaml file. Add or update the image property for nodes that require the overcloud-hardened-uefi-full image. You can set the image to be used on specific nodes, or for all nodes that use a specific role:

  Specific nodes

    - name: Ceph
      count: 3
      instances:
      - hostname: overcloud-ceph-0
        name: node00
        image:
          href: file:///var/lib/ironic/images/overcloud-minimal.qcow2
      - hostname: overcloud-ceph-1
        name: node01
        image:
          href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2
      - hostname: overcloud-ceph-2
        name: node02
        image:
          href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2

  All nodes configured for a specific role
    - name: ComputeHCI
      count: 3
      defaults:
        image:
          href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2
      instances:
      - hostname: overcloud-ceph-0
        name: node00
      - hostname: overcloud-ceph-1
        name: node01
      - hostname: overcloud-ceph-2
        name: node02

- In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False:

    rhsm_enforce: False

- Run the provisioning command:
    (undercloud)$ openstack overcloud node provision \
      --stack overcloud \
      --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
      /home/stack/templates/overcloud-baremetal-deploy.yaml
- Pass the overcloud-baremetal-deployed.yaml environment file to the openstack overcloud deploy command, as shown in the sketch that follows.
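A minimal sketch of that deployment command, assuming the roles file generated later in this guide and the output file from the provisioning step; the exact set of -e environment files depends on your environment:

    (undercloud)$ openstack overcloud deploy --templates \
      -r /home/stack/templates/roles_data.yaml \
      -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
      -e <other_environment_files>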
2.5. Designating nodes for HCI
To designate nodes for HCI, you must create a new role file to configure the ComputeHCI role, and configure the bare metal nodes with a resource class for ComputeHCI.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc credentials file:

    [stack@director ~]$ source ~/stackrc
- Generate a new roles data file named roles_data.yaml that includes the Controller and ComputeHCI roles:

    (undercloud)$ openstack overcloud roles generate Controller ComputeHCI -o ~/roles_data.yaml
- Open roles_data.yaml and ensure that it has the following parameters and sections. A spot check follows this table.

    Section/Parameter             Value
    Role comment                  Role: ComputeHCI
    Role name                     name: ComputeHCI
    description                   HCI role
    HostnameFormatDefault         %stackname%-novaceph-%index%
    deprecated_nic_config_name    ceph.yaml
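A quick, illustrative way to spot-check those values, assuming the generated file is at ~/roles_data.yaml (adjust the -A context size if the role definition is longer in your version):

    (undercloud)$ grep -A 20 'name: ComputeHCI' ~/roles_data.yaml | \
      grep -E 'description|HostnameFormatDefault|deprecated_nic_config_name'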
- Register the ComputeHCI nodes for the overcloud by adding them to your node definition template, node.json or node.yaml.
- Inspect the node hardware:

    (undercloud)$ openstack overcloud node introspect --all-manageable --provide
- Tag each bare metal node that you want to designate for HCI with a custom HCI resource class:

    (undercloud)$ openstack baremetal node set \
      --resource-class baremetal.HCI <node>

  Replace <node> with the ID of the bare metal node. A worked example follows this step.
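An illustrative example, using a hypothetical node name data-center-1-hci-0; list your own node UUIDs or names first and substitute them:

    # Find the node IDs, then tag the node you want to use for HCI.
    (undercloud)$ openstack baremetal node list -c UUID -c Name -c "Provisioning State"
    (undercloud)$ openstack baremetal node set --resource-class baremetal.HCI data-center-1-hci-0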
- Add the ComputeHCI role to your /home/stack/templates/overcloud-baremetal-deploy.yaml file, and define any predictive node placements, resource classes, or other attributes that you want to assign to your nodes:

    - name: Controller
      count: 3
    - name: ComputeHCI
      count: 1
      defaults:
        resource_class: baremetal.HCI
- Open the baremetal.yaml file and ensure that it contains the network configuration necessary for HCI. The following is an example configuration:

    - name: ComputeHCI
      count: 3
      hostname_format: compute-hci-%index%
      defaults:
        profile: ComputeHCI
        network_config:
          template: /home/stack/templates/three-nics-vlans/compute-hci.j2
        networks:
        - network: ctlplane
          vif: true
        - network: external
          subnet: external_subnet
        - network: internalapi
          subnet: internal_api_subnet01
        - network: storage
          subnet: storage_subnet01
        - network: storage_mgmt
          subnet: storage_mgmt_subnet01
        - network: tenant
          subnet: tenant_subnet01
  Note: The network configuration in the ComputeHCI role contains the storage_mgmt network. Ceph OSD nodes use this network to make redundant copies of data. The network configuration for the Compute role does not contain this network.

  See Bare Metal Provisioning for more information.
- Run the provisioning command:

    (undercloud)$ openstack overcloud node provision \
      --stack overcloud \
      --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
      /home/stack/templates/overcloud-baremetal-deploy.yaml

- Monitor the provisioning progress in a separate terminal:

    (undercloud)$ watch openstack baremetal node list
  Note: The watch command renews every 2 seconds by default. The -n option sets the renewal timer to a different value, as shown in the example after this note. To stop the watch process, enter Ctrl-c.
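For example, to refresh every 10 seconds instead of every 2 (the interval is illustrative):

    (undercloud)$ watch -n 10 openstack baremetal node list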
- Verification: When provisioning is successful, the node state changes from available to active.
2.6. Defining the root disk for multi-disk Ceph clusters
Ceph Storage nodes typically use multiple disks. Director must identify the root disk in multiple disk configurations. The overcloud image is written to the root disk during the provisioning process.
Hardware properties are used to identify the root disk. For more information about properties you can use to identify the root disk, see Section 2.6.1, “Properties that identify the root disk”.
Procedure
- Verify the disk information from the hardware introspection of each node:

    (undercloud)$ openstack baremetal introspection data save <node_uuid> --file <output_file_name>
Copy to Clipboard Copied! -
Replace
<node_uuid>
with the UUID of the node. Replace
<output_file_name>
with the name of the file that contains the output of the node introspection.For example, the data for one node might show three disks:
[ { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sda", "wwn_vendor_extension": "0x1ea4dcc412a9632b", "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b", "model": "PERC H330 Mini", "wwn": "0x61866da04f380700", "serial": "61866da04f3807001ea4dcc412a9632b" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdb", "wwn_vendor_extension": "0x1ea4e13c12e36ad6", "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6", "model": "PERC H330 Mini", "wwn": "0x61866da04f380d00", "serial": "61866da04f380d001ea4e13c12e36ad6" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdc", "wwn_vendor_extension": "0x1ea4e31e121cfb45", "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45", "model": "PERC H330 Mini", "wwn": "0x61866da04f37fc00", "serial": "61866da04f37fc001ea4e31e121cfb45" } ]
[ { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sda", "wwn_vendor_extension": "0x1ea4dcc412a9632b", "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b", "model": "PERC H330 Mini", "wwn": "0x61866da04f380700", "serial": "61866da04f3807001ea4dcc412a9632b" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdb", "wwn_vendor_extension": "0x1ea4e13c12e36ad6", "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6", "model": "PERC H330 Mini", "wwn": "0x61866da04f380d00", "serial": "61866da04f380d001ea4e13c12e36ad6" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdc", "wwn_vendor_extension": "0x1ea4e31e121cfb45", "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45", "model": "PERC H330 Mini", "wwn": "0x61866da04f37fc00", "serial": "61866da04f37fc001ea4e31e121cfb45" } ]
Copy to Clipboard Copied!
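An optional way to pull only the disk entries out of the saved file, assuming jq is available on the undercloud and the standard inventory layout of the introspection data:

    (undercloud)$ jq '.inventory.disks' <output_file_name>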
- Set the root disk for the node by using a unique hardware property:

    (undercloud)$ openstack baremetal node set --property root_device='{<property_value>}' <node_uuid>
  Replace <property_value> with the unique hardware property value from the introspection data to use to set the root disk.
  Replace <node_uuid> with the UUID of the node.

  Note: A unique hardware property is any property from the hardware introspection step that uniquely identifies the disk. For example, the following command uses the disk serial number to set the root disk:

    (undercloud)$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0
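As an optional check, confirm that the root_device hint is stored in the node properties:

    (undercloud)$ openstack baremetal node show <node_uuid> -f value -c properties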
- Configure the BIOS of each node to first boot from the network and then the root disk.
Director identifies the specific disk to use as the root disk. When you run the openstack overcloud node provision command, director provisions the node and writes the overcloud image to the root disk.
2.6.1. Properties that identify the root disk
There are several properties that you can define to help director identify the root disk:
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
- wwn (String): Unique storage identifier.
- wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
- wwn_vendor_extension (String): Unique vendor storage identifier.
- rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
- name (String): The name of the device, for example: /dev/sdb1.
Use the name property only for devices with persistent names. Do not use the name property to set the root disk for devices that do not have persistent names, because the value can change when the node boots.
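A minimal illustrative alternative to the serial-based example in the previous procedure, using the wwn property with a value taken from the sample introspection output above; substitute the wwn and node UUID from your own environment:

    (undercloud)$ openstack baremetal node set --property root_device='{"wwn": "0x61866da04f380d00"}' <node_uuid>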