1.2. Preparing the overcloud role for hyperconverged nodes
To designate nodes as hyperconverged, you need to define a hyperconverged role. Red Hat OpenStack Platform (RHOSP) provides the predefined role ComputeHCI for hyperconverged nodes. This role colocates the Compute and Ceph object storage daemon (OSD) services, allowing you to deploy them together on the same hyperconverged node.
Procedure
-
Log in to the undercloud as the stack user, and source the stackrc file:

[stack@director ~]$ source ~/stackrc
-
Generate a new custom roles data file that includes the ComputeHCI role, along with the other roles that you intend to use for the overcloud. The following example generates the roles data file roles_data_hci.yaml, which includes the Controller, ComputeHCI, Compute, and CephStorage roles:

(undercloud)$ openstack overcloud roles \
 generate -o /home/stack/templates/roles_data_hci.yaml \
 Controller ComputeHCI Compute CephStorage

Note: The networks listed for the ComputeHCI role in the generated custom roles data file include the networks required for both the Compute and Storage services.
-
Create a local copy of the network_data.yaml file to add a composable network to your overcloud. The network_data.yaml file interacts with the default network environment files, /usr/share/openstack-tripleo-heat-templates/environments/*, to associate the networks that you defined for your ComputeHCI role with the hyperconverged nodes. For more information, see Adding a composable network in the Advanced Overcloud Customization guide.
-
To improve the performance of Red Hat Ceph Storage, update the MTU setting for both the Storage and StorageMgmt networks to 9000, for jumbo frames, in your local copy of network_data.yaml. For more information, see Configuring MTU Settings in Director and Configuring jumbo frames.
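Applied to a local copy of network_data.yaml, the MTU update above might look like the following excerpt. This is an illustrative sketch only: the subnet values shown are the defaults shipped with tripleo-heat-templates and can differ in your deployment; the mtu lines are the only change this step describes.

```yaml
# Illustrative excerpt of network_data.yaml: add mtu: 9000 to the
# Storage and StorageMgmt network definitions; keep the other keys
# as generated for your environment.
- name: Storage
  vip: true
  name_lower: storage
  ip_subnet: '172.16.1.0/24'
  mtu: 9000
- name: StorageMgmt
  vip: true
  name_lower: storage_mgmt
  ip_subnet: '172.16.3.0/24'
  mtu: 9000
```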
-
Create the computeHCI overcloud flavor for hyperconverged nodes:

(undercloud)$ openstack flavor create --id auto \
 --ram <ram_size_mb> --disk <disk_size_gb> \
 --vcpus <no_vcpus> computeHCI

-
Replace <ram_size_mb> with the RAM of the bare metal node, in MB.
-
Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB.
-
Replace <no_vcpus> with the number of CPUs on the bare metal node.

Note: These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.
-
Retrieve a list of your nodes to identify their UUIDs:

(undercloud)$ openstack baremetal node list
-
Tag each bare metal node that you want to designate as hyperconverged with a custom HCI resource class:

(undercloud)$ openstack baremetal node set \
 --resource-class baremetal.HCI <node>

Replace <node> with the ID of the bare metal node.
-
Associate the computeHCI flavor with the custom HCI resource class:

(undercloud)$ openstack flavor set \
 --property resources:CUSTOM_BAREMETAL_HCI=1 \
 computeHCI

To determine the name of a custom resource class that corresponds to the resource class of a Bare Metal service node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix the result with CUSTOM_.

Note: A flavor can request only one instance of a bare metal resource class.
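The naming rule in this step can be sketched as a small shell snippet. This is a hedged illustration, not part of the openstack CLI; it derives the custom resource class name from the baremetal.HCI resource class used earlier in this procedure:

```shell
# Convert a Bare Metal resource class to its custom resource class name:
# uppercase, punctuation replaced with underscores, CUSTOM_ prefix.
resource_class="baremetal.HCI"
custom_name="CUSTOM_$(echo "$resource_class" | tr '[:lower:]' '[:upper:]' | sed 's/[^A-Z0-9]/_/g')"
echo "$custom_name"
```

This yields CUSTOM_BAREMETAL_HCI, which matches the resources:CUSTOM_BAREMETAL_HCI property set on the computeHCI flavor.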
-
Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances:

(undercloud)$ openstack flavor set \
 --property resources:VCPU=0 \
 --property resources:MEMORY_MB=0 \
 --property resources:DISK_GB=0 computeHCI
-
Add the following parameters to the node-info.yaml file to specify the number of hyperconverged and Controller nodes, and the flavor to use for the nodes designated as hyperconverged and Controller:

parameter_defaults:
  OvercloudComputeHCIFlavor: computeHCI
  ComputeHCICount: 3
  OvercloudControlFlavor: baremetal
  ControllerCount: 3
Additional resources
1.2.1. Defining the root disk
When a node has multiple disks, director must identify the root disk during provisioning. For example, most Ceph Storage nodes use multiple disks. By default, director writes the overcloud image to the root disk during the provisioning process.
There are several properties that you can define to help the director identify the root disk:
-
model (String): Device identifier.
-
vendor (String): Device vendor.
-
serial (String): Disk serial number.
-
hctl (String): Host:Channel:Target:Lun for SCSI.
-
size (Integer): Size of the device, in GB.
-
wwn (String): Unique storage identifier.
-
wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
-
wwn_vendor_extension (String): Unique vendor storage identifier.
-
rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
-
name (String): The name of the device, for example: /dev/sdb1.
-
by_path (String): The unique PCI path of the device. Use this property if you do not want to use the UUID of the device.
Use the name property only for devices with persistent names. Do not use name to set the root disk for any other device because this value can change when the node boots.
Complete the following steps to specify the root device using its serial number.
Procedure
Check the disk information from the hardware introspection of each node. Run the following command to display the disk information of a node:

(undercloud)$ openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq ".inventory.disks"

For example, the data for one node might show three disks.
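As an illustration, the introspection data for a node with three disks might resemble the following excerpt. Only the serial of the second disk, /dev/sdb, comes from this procedure (it is the value used in the next step); the other names, sizes, and serials are hypothetical:

```json
[
  {
    "name": "/dev/sda",
    "rotational": true,
    "size": 299439751168,
    "serial": "61866da04f380d001ea4e13c12e36ad4"
  },
  {
    "name": "/dev/sdb",
    "rotational": true,
    "size": 299439751168,
    "serial": "61866da04f380d001ea4e13c12e36ad6"
  },
  {
    "name": "/dev/sdc",
    "rotational": true,
    "size": 299439751168,
    "serial": "61866da04f380d001ea4e13c12e36ad7"
  }
]
```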
Set the root_device parameter for the node definition. The following example shows how to set the root device to disk 2, which has 61866da04f380d001ea4e13c12e36ad6 as the serial number:

(undercloud)$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0

Note: Ensure that you configure the BIOS of each node to include booting from the root disk that you choose. Configure the boot order to boot from the network first, then to boot from the root disk.
The director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, the director provisions the node and writes the overcloud image to the root disk.