Chapter 2. Deploying HCI hardware
This section contains procedures and information about the preparation and configuration of hyperconverged nodes.
Prerequisites
- You have read Deploying an overcloud and Red Hat Ceph Storage in Deploying Red Hat Ceph and OpenStack together with director.
2.1. Cleaning Ceph Storage node disks
Ceph Storage OSDs and journal partitions require factory clean disks. The Bare Metal Provisioning service (ironic) must erase all data and metadata from these disks before the Ceph OSD services are installed.
You can configure director to delete all disk data and metadata by default by using the Bare Metal Provisioning service. When director is configured to perform this task, the Bare Metal Provisioning service performs an additional step to boot the nodes each time a node is set to available.
The Bare Metal Provisioning service uses the wipefs --force --all command. This command deletes all data and metadata on the disk but it does not perform a secure erase. A secure erase takes much longer.
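For reference, the following is a minimal sketch of the equivalent manual command against a single block device; the device path /dev/sdb is illustrative only, because director runs the wipe for you through the Bare Metal Provisioning service during node cleaning:

  sudo wipefs --force --all /dev/sdb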
Procedure
- Open /home/stack/undercloud.conf and add the following parameter:

  clean_nodes=true

- Save /home/stack/undercloud.conf.
- Update the undercloud configuration:

  openstack undercloud install
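For reference, a minimal sketch of the resulting undercloud.conf excerpt, assuming the parameter is placed in the [DEFAULT] section alongside your existing settings:

  [DEFAULT]
  clean_nodes = true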
2.2. Registering nodes
Register the nodes to enable communication with director.
Procedure
- Create a node inventory JSON file in /home/stack. Enter hardware and power management details for each node. For an example layout, see the sketch after this procedure.
- Save the new file.
- Initialize the stack user:

  $ source ~/stackrc

- Import the JSON inventory file into director and register the nodes:

  $ openstack overcloud node import <inventory_file>

  Replace <inventory_file> with the name of the file created in the first step.

- Assign the kernel and ramdisk images to each node:

  $ openstack overcloud node configure <node>
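The following is a minimal sketch of a node inventory file, assuming IPMI power management; the node name, hardware values, MAC address, credentials, and IP address are illustrative placeholders:

  {
    "nodes": [
      {
        "name": "node01",
        "cpu": "4",
        "memory": "6144",
        "disk": "40",
        "arch": "x86_64",
        "ports": [
          {"address": "aa:aa:aa:aa:aa:aa"}
        ],
        "pm_type": "ipmi",
        "pm_user": "admin",
        "pm_password": "p@55w0rd!",
        "pm_addr": "192.168.24.205"
      }
    ]
  }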
2.3. Verifying available Red Hat Ceph Storage packages
Verify all required packages are available to avoid overcloud deployment failures.
2.3.1. Verifying cephadm package installation
Verify the cephadm package is installed on at least one overcloud node. The cephadm package is used to bootstrap the first node of the Ceph Storage cluster.
The cephadm package is included in the overcloud-hardened-uefi-full.qcow2 image. The tripleo_cephadm role uses the Ansible package module to ensure it is present in the image.
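One way to confirm this, assuming a node is already provisioned and reachable over SSH (the tripleo-admin user and the node address are illustrative assumptions), is to query the package directly:

  $ ssh tripleo-admin@<node_ip> rpm -q cephadm

If the package is present, rpm prints the installed version; otherwise it reports that cephadm is not installed.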
2.4. Deploying the software image for an HCI environment
Nodes configured for an HCI environment must use the overcloud-hardened-uefi-full.qcow2 software image. Using this software image requires a Red Hat OpenStack Platform (RHOSP) subscription.
Procedure
- Open your /home/stack/templates/overcloud-baremetal-deploy.yaml file.
- Add or update the image property for nodes that require the overcloud-hardened-uefi-full image. You can set the image to be used on specific nodes, or for all nodes that use a specific role. For an example of both forms, see the sketch after this procedure.
- In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False:

  rhsm_enforce: False

- Run the provisioning command:

  (undercloud)$ openstack overcloud node provision \
    --stack overcloud \
    --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
    /home/stack/templates/overcloud-baremetal-deploy.yaml

- Pass the overcloud-baremetal-deployed.yaml environment file to the openstack overcloud deploy command.
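The following is a minimal sketch of the image property in overcloud-baremetal-deploy.yaml, shown once for a specific node under instances and once for every node in a role under defaults; the role name, counts, hostnames, and the image location under /var/lib/ironic/images are illustrative assumptions:

  # Specific nodes
  - name: ComputeHCI
    count: 2
    instances:
      - hostname: overcloud-computehci-0
        name: node02
        image:
          href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2

  # All nodes configured for a specific role
  - name: ComputeHCI
    count: 2
    defaults:
      image:
        href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2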
2.5. Designating nodes for HCI
To designate nodes for HCI, you must create a new role file to configure the ComputeHCI role, and configure the bare metal nodes with a resource class for ComputeHCI.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc credentials file:

  [stack@director ~]$ source ~/stackrc

- Generate a new roles data file named roles_data.yaml that includes the Controller and ComputeHCI roles:

  (undercloud)$ openstack overcloud roles generate Controller ComputeHCI -o ~/roles_data.yaml

- Open roles_data.yaml and ensure that it has the following parameters and sections:

  Section/Parameter           Value
  Role comment                Role: ComputeHCI
  Role name                   name: ComputeHCI
  description                 HCI role
  HostnameFormatDefault       %stackname%-novaceph-%index%
  deprecated_nic_config_name  ceph.yaml

- Register the ComputeHCI nodes for the overcloud by adding them to your node definition template, node.json or node.yaml.
- Inspect the node hardware:

  (undercloud)$ openstack overcloud node introspect --all-manageable --provide

- Tag each bare metal node that you want to designate for HCI with a custom HCI resource class:

  (undercloud)$ openstack baremetal node set \
    --resource-class baremetal.HCI <node>

  Replace <node> with the ID of the bare metal node.

- Add the ComputeHCI role to your /home/stack/templates/overcloud-baremetal-deploy.yaml file, and define any predictive node placements, resource classes, or other attributes that you want to assign to your nodes.
- Open the baremetal.yaml file and ensure that it contains the network configuration necessary for HCI. For an example of the node definition and network configuration, see the sketch after this procedure.

  Note: Network configuration in the ComputeHCI role contains the storage_mgmt network. Ceph OSD nodes use this network to make redundant copies of data. The network configuration for the Compute role does not contain this network.

  See Configuring the Bare Metal Provisioning service for more information.

- Run the provisioning command:

  (undercloud)$ openstack overcloud node provision \
    --stack overcloud \
    --network-config \
    --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
    /home/stack/templates/overcloud-baremetal-deploy.yaml

- Monitor the provisioning progress in a separate terminal:

  (undercloud)$ watch openstack baremetal node list

  Note: The watch command renews every 2 seconds by default. The -n option sets the renewal timer to a different value.

- To stop the watch process, enter Ctrl-c.
- Verification: When provisioning is successful, the node state changes from available to active.
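The following is a minimal sketch of a ComputeHCI entry in /home/stack/templates/overcloud-baremetal-deploy.yaml that combines the HCI resource class, node placement, and the per-role network configuration, including storage_mgmt; the counts, hostnames, network names, and NIC template path are illustrative assumptions:

  - name: Controller
    count: 3
  - name: ComputeHCI
    count: 2
    defaults:
      resource_class: baremetal.HCI
      networks:
        - network: ctlplane
          vif: true
        - network: internal_api
        - network: tenant
        - network: storage
        - network: storage_mgmt
      network_config:
        template: /home/stack/templates/nic-configs/ceph.j2
    instances:
      - hostname: overcloud-novaceph-0
        name: node02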
2.6. Defining the root disk for multi-disk Ceph clusters
Ceph Storage nodes typically use multiple disks. Director must identify the root disk in multiple disk configurations. The overcloud image is written to the root disk during the provisioning process.
Hardware properties are used to identify the root disk. For more information about properties you can use to identify the root disk, see Properties that identify the root disk.
Procedure
- Verify the disk information from the hardware introspection of each node:

  (undercloud)$ openstack baremetal introspection data save <node_uuid> --file <output_file_name>

  Replace <node_uuid> with the UUID of the node.
  Replace <output_file_name> with the name of the file that contains the output of the node introspection.

  For example, the data for one node might show three disks. See the sketch after this procedure for an illustration of the disk data.

- Set the root disk for the node by using a unique hardware property:

  (undercloud)$ openstack baremetal node set --property root_device='{<property_value>}' <node_uuid>

  Replace <property_value> with the unique hardware property value from the introspection data to use to set the root disk.
  Replace <node_uuid> with the UUID of the node.

  Note: A unique hardware property is any property from the hardware introspection step that uniquely identifies the disk. For example, the following command uses the disk serial number to set the root disk:

  (undercloud)$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0

- Configure the BIOS of each node to first boot from the network and then the root disk.
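The following is an illustrative sketch of the disk portion of the saved introspection data; the vendor, model, serial, WWN, and size values are placeholders, and your output typically contains one JSON object per physical disk:

  [
    {
      "name": "/dev/sda",
      "size": 299439751168,
      "rotational": true,
      "vendor": "EXAMPLE",
      "model": "EXAMPLE DISK",
      "serial": "61866da04f380d001ea4e13c12e36ad6",
      "wwn": "0x61866da04f380d00"
    },
    {
      "name": "/dev/sdb",
      "size": 299439751168,
      "rotational": true,
      "vendor": "EXAMPLE",
      "model": "EXAMPLE DISK",
      "serial": "61866da04f380d001ea4e13c12e36ad7",
      "wwn": "0x61866da04f380d01"
    }
  ]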
Director identifies the specific disk to use as the root disk. When you run the openstack overcloud node provision command, director provisions and writes the overcloud image to the root disk.
2.6.1. Properties that identify the root disk
There are several properties that you can define to help director identify the root disk:
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
- wwn (String): Unique storage identifier.
- wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
- wwn_vendor_extension (String): Unique vendor storage identifier.
- rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
- name (String): The name of the device, for example: /dev/sdb1.
Use the name property for devices with persistent names. Do not use the name property to set the root disk for devices that do not have persistent names because the value can change when the node boots.
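For example, the following sketch selects the root disk by WWN instead of by serial number; the WWN value is an illustrative placeholder:

  (undercloud)$ openstack baremetal node set --property root_device='{"wwn": "0x61866da04f380d00"}' <node_uuid>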