Chapter 7. Deploying a RHOSP hyperconverged infrastructure (HCI) with director Operator
You can use director Operator (OSPdO) to deploy an overcloud with hyperconverged infrastructure (HCI). An overcloud with HCI colocates Compute and Red Hat Ceph Storage OSD services on the same nodes.
7.1. Prerequisites
- Your Compute HCI nodes require extra disks to use as OSDs.
- You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
- You have created the overcloud networks by using the OpenStackNetConfig custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator.
- You have created ConfigMaps to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator.
- You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
- You have created and applied an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.
7.2. Creating a roles_data.yaml file with the Compute HCI role for director Operator
To configure the Compute HCI role in your overcloud, you must add the ComputeHCI role to the roles_data.yaml file that you include with your overcloud deployment. Ensure that you use roles_data.yaml as the file name.
Procedure
Access the remote shell for openstackclient:

$ oc rsh -n openstack openstackclient

Unset the OS_CLOUD environment variable:

$ unset OS_CLOUD

Change to the cloud-admin directory:

$ cd /home/cloud-admin/

Generate a new roles_data.yaml file with the Controller and ComputeHCI roles:

$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeHCI

Exit the openstackclient pod:

$ exit

Copy the custom roles_data.yaml file from the openstackclient pod to your custom templates directory:

$ oc cp openstackclient:/home/cloud-admin/roles_data.yaml custom_templates/roles_data.yaml -n openstack
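Optionally, you can confirm that the copied file contains both roles before you package it with your other custom templates. For example, a quick check with grep:

$ grep -E 'name: (Controller|ComputeHCI)' custom_templates/roles_data.yaml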
7.3. Configuring HCI networking in director Operator
Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute HCI role.
Procedure
Create a directory for your custom templates:

$ mkdir custom_templates

- Create a custom template file named multiple_nics_vlans_dvr.j2 in your custom_templates directory.
- Add configuration for the NICs of your bare-metal nodes to your multiple_nics_vlans_dvr.j2 file. For an example NIC configuration file, see Custom NIC heat template for HCI Compute nodes.

Create a directory for your custom environment files:

$ mkdir custom_environment_files

Map the NIC template for your overcloud role in the network-environment.yaml environment file in your custom_environment_files directory:

parameter_defaults:
  ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
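The parameter name follows the <RoleName>NetworkConfigTemplate pattern, so you can map a NIC template for each role in your roles_data.yaml file in the same environment file. A minimal sketch, assuming the Controller role reuses the same template:

parameter_defaults:
  ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ControllerNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'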
7.4. Custom NIC heat template for HCI Compute nodes
The following example is a heat template that contains NIC configuration for the HCI Compute bare metal nodes. The configuration in the heat template maps the networks to the following bridges and interfaces:
| Networks | Bridge | Interface |
|---|---|---|
| Control Plane | N/A | nic1 |
| External | br-ex ({{ neutron_physical_bridge_name }}) | nic3 |
| Internal API, Tenant, Storage, Storage Management (VLANs) | br-tenant | nic4 |
To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2 in your custom_templates directory on your workstation. You can modify this configuration for the NIC configuration of your bare-metal nodes.
Example
{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
# BMH provisioning interface used for ctlplane
- type: interface
  name: nic1
  mtu: 1500
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
# Disable OCP cluster interface
- type: interface
  name: nic2
  mtu: 1500
  use_dhcp: false
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network == 'External' %}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  use_dhcp: false
{% if network in role_networks %}
  addresses:
  - ip_netmask:
      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
  members:
  - type: interface
    name: nic3
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    primary: true
{% endif %}
{% endfor %}
- type: ovs_bridge
  name: br-tenant
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  members:
  - type: interface
    name: nic4
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    primary: true
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network not in ["External"] and network in role_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask:
        {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
{% endfor %}
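You can adapt the interface layout to match your hardware. For example, if your Compute HCI nodes use two bonded NICs for the tenant bridge instead of a single interface, you might replace the br-tenant member with a Linux bond. The following is a minimal sketch, assuming hypothetical nic4 and nic5 interfaces and active-backup bonding; adjust the interface names and bonding options for your environment:

- type: ovs_bridge
  name: br-tenant
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  members:
  # Bond two physical NICs instead of using a single interface
  - type: linux_bond
    name: bond1
    mtu: {{ min_viable_mtu }}
    bonding_options: "mode=active-backup"
    members:
    - type: interface
      name: nic4
      mtu: {{ min_viable_mtu }}
      primary: true
    - type: interface
      name: nic5
      mtu: {{ min_viable_mtu }}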
7.5. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
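For the HCI deployment in this chapter, the custom_templates directory that you archive would typically contain the roles file and the NIC template from the previous sections, for example:

custom_templates/
├── roles_data.yaml
└── multiple_nics_vlans_dvr.j2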
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
Navigate to the location of your custom templates:

$ cd ~/custom_templates

Archive the templates into a gzipped tarball, including both the YAML files and the Jinja2 NIC template:

$ tar -cvzf custom-config.tar.gz *.yaml *.j2
Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:

$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack

Verify that the ConfigMap CR is created:

$ oc get configmap/tripleo-tarball-config -n openstack
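You can also list the contents of the tarball to confirm that it includes your roles_data.yaml file and NIC template, for example:

$ tar -tzf custom-config.tar.gz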
7.6. Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator
The following example is an environment file that contains Red Hat Ceph Storage configuration for the Compute HCI nodes. The configuration enables the Ceph MON, MGR, OSD, and Client services on the overcloud, uses a dynamically generated Ceph specification (CephDynamicSpec) for the OSD devices on the Compute HCI nodes, and configures the Block Storage (cinder), Image (glance), and Compute (nova) services to use Ceph RBD back ends.

You can modify this configuration to suit the storage configuration of your bare-metal nodes. Use the "Ceph Placement Groups (PGs) per Pool Calculator" to determine the value for the CephPoolDefaultPgNum parameter.

To use this template in your deployment, copy the contents of the example to compute-hci.yaml in your custom_environment_files directory on your workstation.
resource_registry:
  OS::TripleO::Services::CephMgr: deployment/cephadm/ceph-mgr.yaml
  OS::TripleO::Services::CephMon: deployment/cephadm/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: deployment/cephadm/ceph-osd.yaml
  OS::TripleO::Services::CephClient: deployment/cephadm/ceph-client.yaml

parameter_defaults:
  CephDynamicSpec: true
  CephSpecFqdn: true
  CephConfigOverrides:
    rgw_swift_enforce_content_length: true
    rgw_swift_versioning_enabled: true
    osd:
      osd_memory_target_autotune: true
      osd_numa_auto_affinity: true
    mgr:
      mgr/cephadm/autotune_memory_target_ratio: 0.2
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  CinderRbdPoolName: "volumes"
  NovaRbdPoolName: "vms"
  GlanceRbdPoolName: "images"
  CephPoolDefaultPgNum: 32
  CephPoolDefaultSize: 2
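By default, the dynamically generated specification uses the available data disks on each node for OSDs. If you want to restrict the OSDs to specific disks, such as sdb, sdc, and sdd, you can add an OSD specification to the same environment file. The following is a minimal sketch that uses the CephOsdSpec parameter; verify the parameter and its format against the tripleo-heat-templates version in your deployment:

parameter_defaults:
  # Hypothetical example: limit OSDs to three specific data disks
  CephOsdSpec:
    data_devices:
      paths:
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd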
7.7. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap
object named heat-env-config
to store and retrieve environment files. The ConfigMap
object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
For example, the following ConfigMap
contains two environment files:
...
data:
  network_environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com
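For the HCI deployment in this chapter, the data section would typically contain the network-environment.yaml and compute-hci.yaml files that you created earlier. A minimal sketch, abbreviated for readability:

...
data:
  network-environment.yaml: |+
    parameter_defaults:
      ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  compute-hci.yaml: |+
    resource_registry:
      OS::TripleO::Services::CephMgr: deployment/cephadm/ceph-mgr.yaml
      ...
    parameter_defaults:
      CephDynamicSpec: true
      ...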
Upload a set of custom environment files from a directory to a ConfigMap
object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
Create the heat-env-config ConfigMap object:

$ oc create configmap -n openstack heat-env-config \
    --from-file=~/<dir_custom_environment_files>/ \
    --dry-run=client -o yaml | oc apply -f -

- Replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.

Verify that the heat-env-config ConfigMap object contains all the required environment files:

$ oc get configmap/heat-env-config -n openstack
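To inspect the contents of the stored environment files, you can, for example, view the ConfigMap object in YAML format:

$ oc get configmap/heat-env-config -n openstack -o yaml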
7.8. Creating HCI Compute nodes and deploying the overcloud
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.

Use the following commands to view the OpenStackBareMetalSet CRD definition and specification schema:
$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
- You have created a control plane with the OpenStackControlPlane CRD.
Procedure
Create a file named openstack-hcicompute.yaml on your workstation. Include the resource specification for the HCI Compute nodes. For example, the specification for 3 HCI Compute nodes is as follows:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: computehci
  namespace: openstack
spec:
  count: 3
  baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  ctlplaneInterface: enp8s0
  networks:
    - ctlplane
    - internal_api
    - tenant
    - storage
    - storage_mgmt
  roleName: ComputeHCI
  passwordSecret: userpassword

Save the openstack-hcicompute.yaml file and create the HCI Compute nodes:

$ oc create -f openstack-hcicompute.yaml -n openstack

Verify that the resource for the HCI Compute nodes is created:

$ oc get openstackbaremetalset/computehci -n openstack

To verify the creation of the HCI Compute nodes, view the bare-metal machines that RHOCP manages:

$ oc get baremetalhosts -n openshift-machine-api
- Create the Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
- Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
- Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.