Chapter 8. Deploying RHOSP with an external Red Hat Ceph Storage cluster with director Operator
You can use director Operator (OSPdO) to deploy an overcloud that connects to an external Red Hat Ceph Storage cluster.
Prerequisites
- You have an external Red Hat Ceph Storage cluster.
- You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
- You have created the overcloud networks by using the OpenStackNetConfig custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator.
- You have created ConfigMaps to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator.
- You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
- You have created and applied an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.
8.1. Configuring networking for the Compute role in director Operator
Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute role.
Procedure
1. Create a directory for your custom templates:

   $ mkdir custom_templates

2. Create a custom template file named multiple_nics_vlans_dvr.j2 in your custom_templates directory.
3. Add configuration for the NICs of your bare-metal Compute nodes to your multiple_nics_vlans_dvr.j2 file. For an example NIC configuration file, see Custom NIC heat template for Compute nodes.
4. Create a directory for your custom environment files:

   $ mkdir custom_environment_files

5. Map the NIC template for your overcloud role in the network-environment.yaml environment file in your custom_environment_files directory:

   parameter_defaults:
     ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
8.2. Custom NIC heat template for Compute nodes
The following example is a heat template that contains NIC configuration for the Compute bare-metal nodes in an overcloud that connects to an external Red Hat Ceph Storage cluster. The configuration in the heat template maps the networks to the following bridges and interfaces:
| Networks | Bridge | Interface |
|---|---|---|
| Control Plane, Storage, Internal API | N/A | |
| External, Tenant | | |
To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2 in your custom_templates directory on your workstation. You can modify this configuration for the NIC configuration of your bare-metal nodes.
Example
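The original example template is not reproduced here. As a rough sketch only, an os-net-config Jinja2 NIC template for this layout might look like the following. The interface names (nic1, nic4), the br-ex bridge name, and the variable names are assumptions for illustration, not the shipped example:

```yaml
# Hypothetical sketch of multiple_nics_vlans_dvr.j2 -- interface names,
# bridge name, and variables are illustrative assumptions.
network_config:
- type: interface
  name: nic1
  use_dhcp: false
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
- type: ovs_bridge
  name: br-ex
  use_dhcp: false
  members:
  - type: interface
    name: nic4
    primary: true
```

Adapt the interface list, VLANs, and bridge membership to match the physical NICs on your bare-metal nodes.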
8.3. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
1. Navigate to the location of your custom templates:

   $ cd ~/custom_templates

2. Archive the templates into a gzipped tarball:

   $ tar -cvzf custom-config.tar.gz *.yaml

3. Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:

   $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack

4. Verify that the ConfigMap CR is created:

   $ oc get configmap/tripleo-tarball-config -n openstack
8.4. Custom environment file for configuring external Ceph Storage usage in director Operator
To integrate with an external Red Hat Ceph Storage cluster, include an environment file with parameters and values similar to those shown in the following example. The example enables the CephExternal and CephClient services on your overcloud nodes, and sets the pools for different RHOSP services.
You can modify this configuration to suit your storage configuration.
To use this template in your deployment, copy the contents of the example to ceph-ansible-external.yaml in your custom_environment_files directory on your workstation.
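The referenced example file is not reproduced here. As a sketch only, a ceph-ansible-external.yaml for external Ceph integration typically sets parameters like the following. The resource_registry path and all values are placeholders and assumptions; replace them with the details of your own cluster and RHOSP version:

```yaml
# Hypothetical ceph-ansible-external.yaml sketch -- the registry path
# and every value below are placeholders, not tested configuration.
resource_registry:
  OS::TripleO::Services::CephExternal: deployment/ceph-ansible/ceph-external.yaml

parameter_defaults:
  CephClusterFSID: '<fsid_of_external_cluster>'
  CephClientKey: '<client_key>'
  CephExternalMonHost: '<mon_host_ips>'
  # Pools for the RHOSP services
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  CinderBackupRbdPoolName: backups
  GlanceRbdPoolName: images
  CephClientUserName: openstack
```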
8.5. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap object named heat-env-config to store and retrieve environment files. The ConfigMap object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
...
For example, the following ConfigMap contains two environment files:
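The inline example is not reproduced here. A sketch consistent with the files created earlier in this chapter (the file contents shown are abbreviated placeholders) could look like this:

```yaml
# Hypothetical heat-env-config ConfigMap sketch -- contents abbreviated.
apiVersion: v1
kind: ConfigMap
metadata:
  name: heat-env-config
  namespace: openstack
data:
  network-environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ceph-ansible-external.yaml: |+
    parameter_defaults:
      CephClusterFSID: '<fsid_of_external_cluster>'
```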
Upload a set of custom environment files from a directory to a ConfigMap object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
1. Create the heat-env-config ConfigMap object:

   $ oc create configmap -n openstack heat-env-config \
     --from-file=~/<dir_custom_environment_files>/ \
     --dry-run=client -o yaml | oc apply -f -

   Replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.

2. Verify that the heat-env-config ConfigMap object contains all the required environment files:

   $ oc get configmap/heat-env-config -n openstack
8.6. Creating Compute nodes and deploying the overcloud
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBaremetalSet CRD definition and specification schema:

$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
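As an illustration of the shape such a CR takes, the following is a minimal sketch; every field value is an assumption, so verify the exact schema with `oc explain openstackbaremetalset.spec` before using it:

```yaml
# Hypothetical OpenStackBaremetalSet CR sketch -- field values are
# illustrative placeholders, not a validated configuration.
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  count: 1
  baseImageUrl: http://<image_server>/<rhel_guest_image>.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  ctlplaneInterface: enp1s0
  networks:
    - ctlplane
    - internal_api
    - tenant
    - storage
```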
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
- You have created a control plane with the OpenStackControlPlane CRD.
Procedure
1. Create your Compute nodes by using the OpenStackBaremetalSet CRD. For more information, see Creating Compute nodes with the OpenStackBaremetalSet CRD.
2. Create the Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
3. Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
4. Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.