Chapter 8. Deploying RHOSP with an external Red Hat Ceph Storage cluster with director Operator
You can use director Operator (OSPdO) to deploy an overcloud that connects to an external Red Hat Ceph Storage cluster.
Prerequisites
- You have an external Red Hat Ceph Storage cluster.
- You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
- You have created the overcloud networks by using the OpenStackNetConfig custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator.
- You have created ConfigMaps to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator.
- You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
- You have created and applied an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.
8.1. Configuring networking for the Compute role in director Operator
Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute role.
Procedure
- Create a directory for your custom templates:
  $ mkdir custom_templates
- Create a custom template file named multiple_nics_vlans_dvr.j2 in your custom_templates directory.
- Add configuration for the NICs of your bare-metal Compute nodes to your multiple_nics_vlans_dvr.j2 file. For an example NIC configuration file, see Custom NIC heat template for Compute nodes.
- Create a directory for your custom environment files:
  $ mkdir custom_environment_files
- Map the NIC template for your overcloud role in the network-environment.yaml environment file in your custom_environment_files directory:
  parameter_defaults:
    ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
8.2. Custom NIC heat template for Compute nodes
The following example is a heat template that contains NIC configuration for the Compute bare-metal nodes in an overcloud that connects to an external Red Hat Ceph Storage cluster. The configuration in the heat template maps the networks to the following bridges and interfaces:
Networks | Bridge | Interface |
---|---|---|
Control Plane, Storage, Internal API | N/A | nic1, nic4 |
External, Tenant | br-ex, br-tenant | nic3, nic4 |
To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2 in your custom_templates directory on your workstation. You can modify this configuration to suit the NIC configuration of your bare-metal nodes.
Example
{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
# BMH provisioning interface used for ctlplane
- type: interface
  name: nic1
  mtu: 1500
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
# Disable OCP cluster interface
- type: interface
  name: nic2
  mtu: 1500
  use_dhcp: false
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network == 'External' %}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  use_dhcp: false
{% if network in role_networks %}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
  members:
  - type: interface
    name: nic3
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    primary: true
{% endif %}
{% endfor %}
- type: ovs_bridge
  name: br-tenant
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  members:
  - type: interface
    name: nic4
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    primary: true
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network not in ["External"] and network in role_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
{% endfor %}
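The lookups in this template resolve against per-network Ansible variables that director generates from your network definitions, using the pattern <lowercase_network_name>_<attribute>. The values below are illustrative assumptions for a Compute role, shown only to make the lookups concrete; your deployment derives the real values from the OpenStackNetConfig CR:

# Illustrative variable values only; not taken from a real deployment.
role_networks: ['InternalApi', 'Storage', 'Tenant']
networks_lower:
  External: external
  InternalApi: internal_api
  Storage: storage
  Tenant: tenant
# Example per-network attributes consumed by the lookups, for the Internal API network:
internal_api_mtu: 1500
internal_api_vlan_id: 20
internal_api_ip: 172.17.0.10
internal_api_cidr: 24
internal_api_host_routes: []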
8.3. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file to the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config. The tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
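For example, if your tarball adds a template at custom-services/my-custom-service.yaml, any environment file that maps it must use that relative path rather than a path on your workstation. The resource alias and file name below are hypothetical, shown only to illustrate the relative reference:

# Hypothetical mapping in a custom environment file.
# The path is relative to the directory where OSPdO extracts the core heat
# templates and your tarball contents, not to your local custom_templates directory.
resource_registry:
  OS::TripleO::Services::MyCustomService: custom-services/my-custom-service.yaml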
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
- Navigate to the location of your custom templates:
  $ cd ~/custom_templates
- Archive the templates into a gzipped tarball:
  $ tar -cvzf custom-config.tar.gz *
- Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:
  $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
- Verify that the ConfigMap CR is created:
  $ oc get configmap/tripleo-tarball-config -n openstack
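The tarball is stored in the ConfigMap as a single binary entry keyed by the file name. As a rough sketch, assuming the default behavior of oc create configmap --from-file with a binary file, the resulting object looks similar to the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tripleo-tarball-config
  namespace: openstack
binaryData:
  # The key is taken from the uploaded file name; the value is the
  # base64-encoded tarball, omitted here.
  custom-config.tar.gz: <base64-encoded tarball contents>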
8.4. Custom environment file for configuring external Ceph Storage usage in director Operator
To integrate with an external Red Hat Ceph Storage cluster, include an environment file with parameters and values similar to those shown in the following example. The example enables the CephExternal and CephClient services on your overcloud nodes, and sets the pools for different RHOSP services. You can modify this configuration to suit your storage configuration.
To use this template in your deployment, copy the contents of the example to ceph-ansible-external.yaml in your custom_environment_files directory on your workstation.
resource_registry:
  OS::TripleO::Services::CephExternal: deployment/cephadm/ceph-client.yaml

parameter_defaults:
  CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19' 1
  CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==' 2
  CephExternalMonHost: '172.16.1.7, 172.16.1.8' 3
  ExternalCeph: true

  # the following parameters enable Ceph backends for Cinder, Glance, Gnocchi and Nova
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  GlanceBackend: rbd
  # Uncomment below if enabling legacy telemetry
  # GnocchiBackend: rbd
  # If the Ceph pools which host VMs, Volumes and Images do not match these
  # names OR the client keyring to use is not named 'openstack', edit the
  # following as needed.
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  CinderBackupRbdPoolName: backups
  GlanceRbdPoolName: images
  # Uncomment below if enabling legacy telemetry
  # GnocchiRbdPoolName: metrics
  CephClientUserName: openstack
  # finally we disable the Cinder LVM backend
  CinderEnableIscsiBackend: false
1 - The file system ID (FSID) of your external Ceph Storage cluster.
2 - The Ceph client key for the client user that RHOSP uses to access the cluster.
3 - The IP addresses of the monitor hosts in your external Ceph Storage cluster.
8.5. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap object named heat-env-config to store and retrieve environment files. The ConfigMap object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
For example, the following ConfigMap contains two environment files:
...
data:
  network_environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com
Upload a set of custom environment files from a directory to a ConfigMap object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
- Create the heat-env-config ConfigMap object:
  $ oc create configmap -n openstack heat-env-config \
    --from-file=~/<dir_custom_environment_files>/ \
    --dry-run=client -o yaml | oc apply -f -
  - Replace <dir_custom_environment_files> with the directory that contains the environment files that you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.
- Verify that the heat-env-config ConfigMap object contains all the required environment files:
  $ oc get configmap/heat-env-config -n openstack
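For the deployment described in this chapter, the heat-env-config ConfigMap object contains one data entry for each file in your custom_environment_files directory. The following is a sketch of the expected shape, with file contents abbreviated, assuming you created the files described in the earlier sections:

apiVersion: v1
kind: ConfigMap
metadata:
  name: heat-env-config
  namespace: openstack
data:
  network-environment.yaml: |
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ceph-ansible-external.yaml: |
    resource_registry:
      OS::TripleO::Services::CephExternal: deployment/cephadm/ceph-client.yaml
    parameter_defaults:
      ExternalCeph: true
      # ...remaining parameters from the external Ceph environment file example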
8.6. Creating Compute nodes and deploying the overcloud
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBaremetalSet CRD definition and specification schema:
$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
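The following is a minimal sketch of an OpenStackBaremetalSet CR for a Compute role. The spec field names and values are assumptions for illustration only; verify them against the schema output of oc explain openstackbaremetalset.spec before you apply the CR:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  # Assumed fields, for illustration only: the number of Compute nodes to
  # provision, the SSH key secret for the deployment user, the control plane
  # interface on the bare-metal hosts, and the networks defined in your
  # OpenStackNetConfig CR.
  count: 1
  deploymentSSHSecret: osp-controlplane-ssh-keys
  ctlplaneInterface: enp2s0
  networks:
    - ctlplane
    - internal_api
    - storage
    - tenant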
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
- You have created a control plane with the OpenStackControlPlane CRD.
Procedure
- Create your Compute nodes by using the OpenStackBaremetalSet CRD. For more information, see Creating Compute nodes with the OpenStackBaremetalSet CRD.
- Create the Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
- Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
- Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.