Chapter 2. Deploying HCI hardware

This section contains procedures and information about the preparation and configuration of hyperconverged nodes.

2.1. Cleaning Ceph Storage node disks

Ceph Storage OSDs and journal partitions require factory-clean disks. The Bare Metal Provisioning service (ironic) must erase all data and metadata from these disks before the Ceph OSD services are installed.

You can configure director to use the Bare Metal Provisioning service to delete all disk data and metadata by default. When director is configured to perform this task, the Bare Metal Provisioning service performs an additional step that boots the nodes each time a node is set to available.

Warning

The Bare Metal Provisioning service uses the wipefs --force --all command. This command deletes all data and metadata on the disk but it does not perform a secure erase. A secure erase takes much longer.
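
For reference, the following is a minimal sketch of the equivalent manual command run against a single disk. The device path /dev/sdb is only an example; when automated cleaning is enabled, the Bare Metal Provisioning service runs this for you and you do not need to run it yourself.

    # Example only: erase all filesystem, RAID, and partition-table signatures
    # from /dev/sdb. This is destructive and is not a secure erase.
    $ sudo wipefs --force --all /dev/sdb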

Procedure

  1. Open /home/stack/undercloud.conf and add the following parameter:

    clean_nodes=true
  2. Save /home/stack/undercloud.conf.
  3. Update the undercloud configuration:

    openstack undercloud install
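
If you prefer not to enable automated cleaning, you can also clean the disk metadata on an individual node manually while the node is in the manageable state. This is an optional alternative, not part of the documented procedure; replace <node> with the node name or UUID:

    $ openstack baremetal node clean <node> \
      --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'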

2.2. Registering nodes

Register the nodes to enable communication with director.

Procedure

  1. Create a node inventory JSON file in /home/stack.
  2. Enter hardware and power management details for each node.

    For example:

    {
        "nodes":[
            {
                "mac":[
                    "b1:b1:b1:b1:b1:b1"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.205"
            },
            {
                "mac":[
                    "b2:b2:b2:b2:b2:b2"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.206"
            },
            {
                "mac":[
                    "b3:b3:b3:b3:b3:b3"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.207"
            },
            {
                "mac":[
                    "c1:c1:c1:c1:c1:c1"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.208"
            },
            {
                "mac":[
                    "c2:c2:c2:c2:c2:c2"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.209"
            },
            {
                "mac":[
                    "c3:c3:c3:c3:c3:c3"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.210"
            },
            {
                "mac":[
                    "d1:d1:d1:d1:d1:d1"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.211"
            },
            {
                "mac":[
                    "d2:d2:d2:d2:d2:d2"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.212"
            },
            {
                "mac":[
                    "d3:d3:d3:d3:d3:d3"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.213"
            }
        ]
    }
  3. Save the new file.
  4. Initialize the stack user:

    $ source ~/stackrc
  5. Import the JSON inventory file into director and register the nodes:

    $ openstack overcloud node import <inventory_file>

    Replace <inventory_file> with the name of the file created in the first step.

  6. Assign the kernel and ramdisk images to each node:

    $ openstack overcloud node configure <node>
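
As an optional check, list the registered nodes to confirm that director created a bare metal record for each entry in the inventory file:

    $ openstack baremetal node list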

2.3. Verifying available Red Hat Ceph Storage packages

Verify that all required packages are available to avoid overcloud deployment failures.

2.3.1. Verifying cephadm package installation

Verify the cephadm package is installed on at least one overcloud node. The cephadm package is used to bootstrap the first node of the Ceph Storage cluster.

The cephadm package is included in the overcloud-hardened-uefi-full.qcow2 image. The tripleo_cephadm role uses the Ansible package module to ensure that cephadm is present in the image.
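
If you want to confirm the package after the nodes are deployed, you can query it on an overcloud node. This is an optional check, and the host name in the prompt is only an example:

    [tripleo-admin@overcloud-controller-0 ~]$ sudo rpm -q cephadm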

2.4. Deploying the software image for an HCI environment

Nodes configured for an HCI environment must use the overcloud-hardened-uefi-full.qcow2 software image. Using this software image requires a Red Hat OpenStack Platform (RHOSP) subscription.

Procedure

  1. Open your /home/stack/templates/overcloud-baremetal-deploy.yaml file.
  2. Add or update the image property for nodes that require the overcloud-hardened-uefi-full image. You can set the image to be used on specific nodes, or for all nodes that use a specific role:

    Specific nodes

    - name: Ceph
      count: 3
      instances:
      - hostname: overcloud-ceph-0
        name: node00
        image:
          href: file:///var/lib/ironic/images/overcloud-minimal.qcow2
      - hostname: overcloud-ceph-1
        name: node01
        image:
          href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2
      - hostname: overcloud-ceph-2
        name: node02
        image:
          href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2

    All nodes configured for a specific role

    - name: ComputeHCI
      count: 3
      defaults:
        image:
          href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2
      instances:
      - hostname: overcloud-ceph-0
        name: node00
      - hostname: overcloud-ceph-1
        name: node01
      - hostname: overcloud-ceph-2
        name: node02

  3. In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False.

    rhsm_enforce: False
  4. Run the provisioning command:

    (undercloud)$ openstack overcloud node provision \
    --stack overcloud \
    --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
    /home/stack/templates/overcloud-baremetal-deploy.yaml
  5. Pass the overcloud-baremetal-deployed.yaml environment file to the openstack overcloud deploy command.
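
The following is a minimal sketch of that deploy command. The roles file path and the placeholder for additional environment files are examples; include whatever templates your deployment requires:

    (undercloud)$ openstack overcloud deploy --templates \
      -r /home/stack/templates/roles_data.yaml \
      -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
      -e <other_environment_files>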

2.5. Designating nodes for HCI

To designate nodes for HCI, you must create a new role file to configure the ComputeHCI role, and configure the bare metal nodes with a resource class for ComputeHCI.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc credentials file:

    [stack@director ~]$ source ~/stackrc
  3. Generate a new roles data file named roles_data.yaml that includes the Controller and ComputeHCI roles:

    (undercloud)$ openstack overcloud roles generate Controller ComputeHCI -o ~/roles_data.yaml
  4. Open roles_data.yaml and ensure that it has the following parameters and sections:

    Section/Parameter            Value
    Role comment                 Role: ComputeHCI
    Role name                    name: ComputeHCI
    description                  HCI role
    HostnameFormatDefault        %stackname%-novaceph-%index%
    deprecated_nic_config_name   ceph.yaml

  5. Register the ComputeHCI nodes for the overcloud by adding them to your node definition template, node.json or node.yaml.
  6. Inspect the node hardware:

    (undercloud)$ openstack overcloud node introspect --all-manageable --provide
  7. Tag each bare metal node that you want to designate for HCI with a custom HCI resource class:

    (undercloud)$ openstack baremetal node set \
     --resource-class baremetal.HCI <node>

    Replace <node> with the ID of the bare metal node.

  8. Add the ComputeHCI role to your /home/stack/templates/overcloud-baremetal-deploy.yaml file, and define any predictive node placements, resource classes, or other attributes that you want to assign to your nodes:

    - name: Controller
      count: 3
    - name: ComputeHCI
      count: 1
      defaults:
        resource_class: baremetal.HCI
  9. Open the baremetal.yaml file and ensure that it contains the network configuration necessary for HCI. The following is an example configuration:

    - name: ComputeHCI
      count: 3
      hostname_format: compute-hci-%index%
      defaults:
        profile: ComputeHCI
        network_config:
          template: /home/stack/templates/three-nics-vlans/compute-hci.j2
        networks:
        - network: ctlplane
          vif: true
        - network: external
          subnet: external_subnet
        - network: internalapi
          subnet: internal_api_subnet01
        - network: storage
          subnet: storage_subnet01
        - network: storage_mgmt
          subnet: storage_mgmt_subnet01
        - network: tenant
          subnet: tenant_subnet01
    Note

    Network configuration in the ComputeHCI role contains the storage_mgmt network. Ceph OSD nodes use this network to make redundant copies of data. The network configuration for the Compute role does not contain this network.

    See Configuring the Bare Metal Provisioning service for more information.

  10. Run the provisioning command:

    (undercloud)$ openstack overcloud node provision \
    --stack overcloud \
    --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
    /home/stack/templates/overcloud-baremetal-deploy.yaml
  11. Monitor the provisioning progress in a separate terminal:

    (undercloud)$ watch openstack baremetal node list
    Note

    The watch command refreshes its output every 2 seconds by default. Use the -n option to set a different refresh interval.

  12. To stop the watch process, enter Ctrl-c.
  13. Verification: When provisioning is successful, the node state changes from available to active.
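
As an additional check, you can confirm the resource class that was assigned to a node. The expected value is the baremetal.HCI class that you set earlier in this procedure; replace <node> with the node name or UUID:

    (undercloud)$ openstack baremetal node show <node> -f value -c resource_class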

2.6. Defining the root disk for multi-disk Ceph clusters

Ceph Storage nodes typically use multiple disks. Director must identify the root disk in multiple disk configurations. The overcloud image is written to the root disk during the provisioning process.

Hardware properties are used to identify the root disk. For more information about properties you can use to identify the root disk, see Properties that identify the root disk.

Procedure

  1. Verify the disk information from the hardware introspection of each node:

    (undercloud)$ openstack baremetal introspection data save <node_uuid> --file <output_file_name>
    • Replace <node_uuid> with the UUID of the node.
    • Replace <output_file_name> with the name of the file that contains the output of the node introspection.

      For example, the data for one node might show three disks:

      [
        {
          "size": 299439751168,
          "rotational": true,
          "vendor": "DELL",
          "name": "/dev/sda",
          "wwn_vendor_extension": "0x1ea4dcc412a9632b",
          "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b",
          "model": "PERC H330 Mini",
          "wwn": "0x61866da04f380700",
          "serial": "61866da04f3807001ea4dcc412a9632b"
        },
        {
          "size": 299439751168,
          "rotational": true,
          "vendor": "DELL",
          "name": "/dev/sdb",
          "wwn_vendor_extension": "0x1ea4e13c12e36ad6",
          "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6",
          "model": "PERC H330 Mini",
          "wwn": "0x61866da04f380d00",
          "serial": "61866da04f380d001ea4e13c12e36ad6"
        },
        {
          "size": 299439751168,
          "rotational": true,
          "vendor": "DELL",
          "name": "/dev/sdc",
          "wwn_vendor_extension": "0x1ea4e31e121cfb45",
          "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45",
          "model": "PERC H330 Mini",
          "wwn": "0x61866da04f37fc00",
          "serial": "61866da04f37fc001ea4e31e121cfb45"
        }
      ]
  2. Set the root disk for the node by using a unique hardware property:

    (undercloud)$ openstack baremetal node set --property root_device='{<property_value>}' <node_uuid>

    • Replace <property_value> with the unique hardware property value from the introspection data to use to set the root disk.
    • Replace <node_uuid> with the UUID of the node.

      Note

      A unique hardware property is any property from the hardware introspection step that uniquely identifies the disk. For example, the following command uses the disk serial number to set the root disk:

      (undercloud)$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0

  3. Configure the BIOS of each node to first boot from the network and then the root disk.

Director identifies the specific disk to use as the root disk. When you run the openstack overcloud node provision command, director provisions and writes the overcloud image to the root disk.
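
You can optionally confirm the root_device hint before provisioning by inspecting the node properties; replace <node_uuid> with the UUID of the node:

    (undercloud)$ openstack baremetal node show <node_uuid> -f value -c properties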

2.6.1. Properties that identify the root disk

There are several properties that you can define to help director identify the root disk:

  • model (String): Device identifier.
  • vendor (String): Device vendor.
  • serial (String): Disk serial number.
  • hctl (String): Host:Channel:Target:Lun for SCSI.
  • size (Integer): Size of the device in GB.
  • wwn (String): Unique storage identifier.
  • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
  • wwn_vendor_extension (String): Unique vendor storage identifier.
  • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
  • name (String): The name of the device, for example: /dev/sdb1.
Important

Use the name property for devices with persistent names. Do not use the name property to set the root disk for devices that do not have persistent names because the value can change when the node boots.
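
For example, if you want a persistent identifier instead of a device name, you can use the wwn property from the introspection data. The value in the following sketch is taken from the /dev/sdb entry in the earlier example output; replace <node_uuid> with the UUID of the node:

    (undercloud)$ openstack baremetal node set --property root_device='{"wwn": "0x61866da04f380d00"}' <node_uuid>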
