Chapter 7. Deploying a RHOSP hyperconverged infrastructure (HCI) with director Operator


You can use director Operator (OSPdO) to deploy an overcloud with hyperconverged infrastructure (HCI). An overcloud with HCI colocates Compute and Red Hat Ceph Storage OSD services on the same nodes.

7.1. Prerequisites

  • Your Compute HCI nodes require extra disks to use as OSDs.
  • You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
  • You have created the overcloud networks by using the OpenStackNetConfig custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator.
  • You have created ConfigMaps to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator.
  • You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
  • You have created and applied an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.

7.2. Creating a roles_data.yaml file with the Compute HCI role for director Operator

To include configuration for the Compute HCI role in your overcloud, you must add the ComputeHCI role to the roles_data.yaml file that you use with your overcloud deployment.

Note

Ensure that you use roles_data.yaml as the file name.

Procedure

  1. Access the remote shell for openstackclient:

    $ oc rsh -n openstack openstackclient
  2. Unset the OS_CLOUD environment variable:

    $ unset OS_CLOUD
  3. Change to the cloud-admin directory:

    $ cd /home/cloud-admin/
  4. Generate a new roles_data.yaml file with the Controller and ComputeHCI roles:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeHCI
  5. Exit the openstackclient pod:

    $ exit
  6. Copy the custom roles_data.yaml file from the openstackclient pod to your custom templates directory:

    $ oc cp openstackclient:/home/cloud-admin/roles_data.yaml custom_templates/roles_data.yaml -n openstack
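
Optionally, you can confirm that the generated file includes the ComputeHCI role before you archive it with your other custom templates. This check only greps the copied file:

$ grep -A 2 'name: ComputeHCI' custom_templates/roles_data.yaml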

7.3. Configuring HCI networking in director Operator

Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute HCI role.

Procedure

  1. Create a directory for your custom templates:

    $ mkdir custom_templates
  2. Create a custom template file named multiple_nics_vlans_dvr.j2 in your custom_templates directory.
  3. Add configuration for the NICs of your bare-metal nodes to your multiple_nics_vlans_dvr.j2 file. For an example NIC configuration file, see Custom NIC heat template for HCI Compute nodes.
  4. Create a directory for your custom environment files:

    $ mkdir custom_environment_files
  5. Map the NIC template for your overcloud role in the network-environment.yaml environment file in your custom_environment_files directory:

    parameter_defaults:
      ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
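
The parameter name follows the <Role>NetworkConfigTemplate pattern. As a sketch, if your Controller nodes use the same NIC layout, you can map that role in the same file; substitute a Controller-specific template if the layout differs:

parameter_defaults:
  ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  # Assumption: the Controller nodes share the same NIC layout as the HCI Compute nodes.
  ControllerNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'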

7.4. Custom NIC heat template for HCI Compute nodes

The following example is a heat template that contains NIC configuration for the HCI Compute bare metal nodes. The configuration in the heat template maps the networks to the following bridges and interfaces:

Networks                        Bridge      Interface
Control Plane                   N/A         nic1
External                        br-ex       nic3
Storage, Internal API, Tenant   br-tenant   nic4

To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2 in your custom_templates directory on your workstation. You can then modify the example to suit the NIC configuration of your bare-metal nodes.

Example

{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
# BMH provisioning interface used for ctlplane
- type: interface
  name: nic1
  mtu: 1500
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
# Disable OCP cluster interface
- type: interface
  name: nic2
  mtu: 1500
  use_dhcp: false
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network == 'External' %}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  use_dhcp: false
{% if network in role_networks %}
  addresses:
  - ip_netmask:
      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
  members:
  - type: interface
    name: nic3
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    primary: true
{% endif %}
{% endfor %}
- type: ovs_bridge
  name: br-tenant
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  members:
  - type: interface
    name: nic4
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    primary: true
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network not in ["External"] and network in role_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask:
        {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
{% endfor %}

7.5. Adding custom templates to the overcloud configuration

Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.

Note

All references in the environment files must be relative to the root of the TripleO heat template collection where the tarball is extracted. For example, the resource_registry entries in the compute-hci.yaml environment file reference paths such as deployment/cephadm/ceph-mgr.yaml relative to that root.

Prerequisites

  • The custom overcloud templates that you want to apply to provisioned nodes.

Procedure

  1. Navigate to the location of your custom templates:

    $ cd ~/custom_templates
  2. Archive the templates into a gzipped tarball:

    $ tar -cvzf custom-config.tar.gz *.yaml *.j2
  3. Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:

    $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
  4. Verify that the ConfigMap CR is created:

    $ oc get configmap/tripleo-tarball-config -n openstack
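
If you want to confirm which files the ConfigMap carries, you can list the tarball contents locally and inspect the stored object, for example:

$ tar -tzf custom-config.tar.gz
$ oc describe configmap/tripleo-tarball-config -n openstack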

7.6. Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator

The following example is an environment file that contains Red Hat Ceph Storage configuration for the Compute HCI nodes. The configuration enables the cephadm-based Ceph services, sets CephDynamicSpec to generate the Ceph service specification dynamically, and configures RBD back ends for the Block Storage (cinder), Compute (nova), and Image (glance) services.

Note

You can modify this configuration to suit the storage configuration of your bare-metal nodes. Use the "Ceph Placement Groups (PGs) per Pool Calculator" to determine the value for the CephPoolDefaultPgNum parameter.

To use this template in your deployment, copy the contents of the example to compute-hci.yaml in your custom_environment_files directory on your workstation.

resource_registry:
  OS::TripleO::Services::CephMgr: deployment/cephadm/ceph-mgr.yaml
  OS::TripleO::Services::CephMon: deployment/cephadm/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: deployment/cephadm/ceph-osd.yaml
  OS::TripleO::Services::CephClient: deployment/cephadm/ceph-client.yaml

parameter_defaults:
  CephDynamicSpec: true
  CephSpecFqdn: true
  CephConfigOverrides:
    rgw_swift_enforce_content_length: true
    rgw_swift_versioning_enabled: true
    osd:
      osd_memory_target_autotune: true
      osd_numa_auto_affinity: true
    mgr:
      mgr/cephadm/autotune_memory_target_ratio: 0.2

  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  CinderRbdPoolName: "volumes"
  NovaRbdPoolName: "vms"
  GlanceRbdPoolName: "images"
  CephPoolDefaultPgNum: 32
  CephPoolDefaultSize: 2
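
With CephDynamicSpec enabled, cephadm builds the OSD specification from the devices it detects on each node. If you want to restrict OSDs to specific disks, such as sdb, sdc, and sdd, the following sketch shows one way to express that; the CephOsdSpec parameter and the device paths are assumptions to verify against your version of the tripleo-heat-templates cephadm environment:

parameter_defaults:
  # Assumption: CephOsdSpec accepts a Ceph Orchestrator OSD service specification.
  CephOsdSpec:
    data_devices:
      paths:
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd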

7.7. Adding custom environment files to the overcloud configuration

To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap object named heat-env-config to store and retrieve environment files. The ConfigMap object stores the environment files in the following format:

...
data:
  <environment_file_name>: |+
    <environment_file_contents>

For example, the following ConfigMap contains two environment files:

...
data:
  network_environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com

Upload a set of custom environment files from a directory to a ConfigMap object that you can include as a part of your overcloud deployment.

Prerequisites

  • The custom environment files for your overcloud deployment.

Procedure

  1. Create the heat-env-config ConfigMap object:

    $ oc create configmap -n openstack heat-env-config \
     --from-file=~/<dir_custom_environment_files>/ \
     --dry-run=client -o yaml | oc apply -f -
    • Replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.
  2. Verify that the heat-env-config ConfigMap object contains all the required environment files:

    $ oc get configmap/heat-env-config -n openstack
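
To review the contents of each stored environment file rather than only the object name, print the ConfigMap data, for example:

$ oc get configmap/heat-env-config -n openstack -o yaml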

7.8. Creating HCI Compute nodes and deploying the overcloud

Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.

Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.

Tip

Use the following commands to view the OpenStackBaremetalSet CRD definition and specification schema:

$ oc describe crd openstackbaremetalset

$ oc explain openstackbaremetalset.spec
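
oc explain also accepts a dotted path if you want to inspect a single field, for example the count field used in the specification later in this section:

$ oc explain openstackbaremetalset.spec.count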

Prerequisites

  • You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
  • You have created a control plane with the OpenStackControlPlane CRD.

Procedure

  1. Create a file named openstack-hcicompute.yaml on your workstation. Include the resource specification for the HCI Compute nodes. For example, the specification for 3 HCI Compute nodes is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackBaremetalSet
    metadata:
      name: computehci 1
      namespace: openstack 2
    spec: 3
      count: 3
      baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2
      deploymentSSHSecret: osp-controlplane-ssh-keys
      ctlplaneInterface: enp8s0
      networks:
        - ctlplane
        - internal_api
        - tenant
        - storage
        - storage_mgmt
      roleName: ComputeHCI
      passwordSecret: userpassword 4

    1 The name of the HCI Compute node bare metal set, for example, computehci.
    2 The OSPdO namespace, for example, openstack.
    3 The configuration for the HCI Compute nodes.
    4 Optional: The Secret resource that provides root access on each node to users with the password.
  2. Save the openstack-hcicompute.yaml file.
  3. Create the HCI Compute nodes:

    $ oc create -f openstack-hcicompute.yaml -n openstack
  4. Verify that the resource for the HCI Compute nodes is created:

    $ oc get openstackbaremetalset/computehci -n openstack
  5. To verify the creation of the HCI Compute nodes, view the bare-metal machines that RHOCP manages (see the monitoring sketch after this procedure):

    $ oc get baremetalhosts -n openshift-machine-api
  6. Create the Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
  7. Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
  8. Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.
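
The following is an optional sketch for watching provisioning progress while the HCI Compute nodes deploy. It assumes the computehci bare metal set created in this procedure:

$ oc get baremetalhosts -n openshift-machine-api -w
$ oc describe openstackbaremetalset/computehci -n openstack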