
Chapter 8. Deploying RHOSP with an external Red Hat Ceph Storage cluster with director Operator


You can use director Operator (OSPdO) to deploy an overcloud that connects to an external Red Hat Ceph Storage cluster.

Prerequisites

  • You have an external Red Hat Ceph Storage cluster.
  • You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
  • You have created the overcloud networks by using the OpenStackNetConfig custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator.
  • You have created ConfigMaps to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator.
  • You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
  • You have created and applied an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.

8.1. Configuring networking for the Compute role in director Operator

Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute role.

Procedure

  1. Create a directory for your custom templates:

    $ mkdir custom_templates
  2. Create a custom template file named multiple_nics_vlans_dvr.j2 in your custom_templates directory.
  3. Add configuration for the NICs of your bare-metal Compute nodes to your multiple_nics_vlans_dvr.j2 file. For an example NIC configuration file, see Custom NIC heat template for Compute nodes.
  4. Create a directory for your custom environment files:

    $ mkdir custom_environment_files
  5. Create a network-environment.yaml environment file in your custom_environment_files directory, and map the NIC template to your Compute role:

    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'

8.2. Custom NIC heat template for Compute nodes

The following example is a heat template that contains NIC configuration for the Compute bare-metal nodes in an overcloud that connects to an external Red Hat Ceph Storage cluster. The configuration in the heat template maps the networks to the following bridges and interfaces:

Networks                        Bridge      Interface
Control Plane                   N/A         nic1
External                        br-ex       nic3
Storage, Internal API, Tenant   br-tenant   nic4

To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2 in your custom_templates directory on your workstation. Modify it as needed to match the NICs of your bare-metal nodes.

Example

{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
# BMH provisioning interface used for ctlplane
- type: interface
  name: nic1
  mtu: 1500
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
# Disable OCP cluster interface
- type: interface
  name: nic2
  mtu: 1500
  use_dhcp: false
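# External network bridge (br-ex): nic3 is its primary member interface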
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network == 'External' %}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  use_dhcp: false
{% if network in role_networks %}
  addresses:
  - ip_netmask:
      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
  members:
  - type: interface
    name: nic3
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    primary: true
{% endif %}
{% endfor %}
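# Tenant bridge: nic4 carries tagged VLANs for the remaining isolated networks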
- type: ovs_bridge
  name: br-tenant
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  members:
  - type: interface
    name: nic4
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    primary: true
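# One tagged VLAN on br-tenant per non-External network assigned to the role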
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network not in ["External"] and network in role_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask:
        {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
{% endfor %}

8.3. Adding custom templates to the overcloud configuration

Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.

Note

All file references in your environment files must be relative to the root of the TripleO heat template collection, which is where OSPdO extracts the tarball.
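
Because the tarball is binary content, the oc create configmap command stores it under binaryData rather than data. The following is a minimal sketch of the resulting object, with the base64-encoded payload truncated for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tripleo-tarball-config
  namespace: openstack
binaryData:
  custom-config.tar.gz: H4sIAAAAAAAA... # base64-encoded tarball (truncated)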

Prerequisites

  • The custom overcloud templates that you want to apply to provisioned nodes.

Procedure

  1. Navigate to the location of your custom templates:

    $ cd ~/custom_templates
  2. Archive the templates, including your custom NIC template, into a gzipped tarball:

    $ tar -cvzf custom-config.tar.gz *.yaml *.j2
  3. Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:

    $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
  4. Verify that the ConfigMap CR is created:

    $ oc get configmap/tripleo-tarball-config -n openstack
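
    If the ConfigMap exists, the command returns output similar to the following, where the AGE value varies:

      NAME                     DATA   AGE
      tripleo-tarball-config   1      2m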

8.4. Custom environment file for configuring external Ceph Storage usage in director Operator

To integrate with an external Red Hat Ceph Storage cluster, include an environment file with parameters and values similar to those shown in the following example. The example enables the CephExternal and CephClient services on your overcloud nodes, and sets the pools for different RHOSP services.

Note

You can modify these values to suit your storage environment.

To use this template in your deployment, copy the contents of the example to ceph-ansible-external.yaml in your custom_environment_files directory on your workstation.

resource_registry:
  OS::TripleO::Services::CephExternal: deployment/cephadm/ceph-client.yaml

parameter_defaults:
  CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19' # 1
  CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==' # 2
  CephExternalMonHost: '172.16.1.7, 172.16.1.8' # 3
  ExternalCeph: true

  # the following parameters enable Ceph backends for Cinder, Glance, Gnocchi and Nova
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  GlanceBackend: rbd
  # Uncomment below if enabling legacy telemetry
  # GnocchiBackend: rbd
  # If the Ceph pools which host VMs, Volumes and Images do not match these
  # names OR the client keyring to use is not named 'openstack',  edit the
  # following as needed.
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  CinderBackupRbdPoolName: backups
  GlanceRbdPoolName: images
  # Uncomment below if enabling legacy telemetry
  # GnocchiRbdPoolName: metrics
  CephClientUserName: openstack

  # finally we disable the Cinder LVM backend
  CinderEnableIscsiBackend: false
1. The file system ID of your external Red Hat Ceph Storage cluster.
2. The Red Hat Ceph Storage client key for your external Red Hat Ceph Storage cluster.
3. A comma-delimited list of the IPs of all MON hosts in your external Red Hat Ceph Storage cluster.
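
If you do not have these values, you can query the external cluster for them. The following commands are a sketch that assumes a cephadm-managed cluster and a client keyring named openstack; run them on a node of the external Red Hat Ceph Storage cluster:

$ sudo cephadm shell -- ceph fsid                           # file system ID for CephClusterFSID
$ sudo cephadm shell -- ceph auth get-key client.openstack  # key for CephClientKey
$ sudo cephadm shell -- ceph mon dump                       # MON host IPs for CephExternalMonHost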

8.5. Adding custom environment files to the overcloud configuration

To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap object named heat-env-config to store and retrieve environment files. The ConfigMap object stores the environment files in the following format:

...
data:
  <environment_file_name>: |+
    <environment_file_contents>

For example, the following ConfigMap contains two environment files:

...
data:
  network_environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com

Upload a set of custom environment files from a directory to a ConfigMap object that you can include as part of your overcloud deployment.

Prerequisites

  • The custom environment files for your overcloud deployment.

Procedure

  1. Create the heat-env-config ConfigMap object:

    $ oc create configmap -n openstack heat-env-config \
     --from-file=~/<dir_custom_environment_files>/ \
     --dry-run=client -o yaml | oc apply -f -
    • Replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.
  2. Verify that the heat-env-config ConfigMap object contains all the required environment files:

    $ oc get configmap/heat-env-config -n openstack
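
    To see the name and contents of each stored environment file, you can also describe the ConfigMap:

      $ oc describe configmap/heat-env-config -n openstack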

8.6. Creating Compute nodes and deploying the overcloud

Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.

Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.

Tip

Use the following commands to view the OpenStackBaremetalSet CRD definition and specification schema:

$ oc describe crd openstackbaremetalset

$ oc explain openstackbaremetalset.spec
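
The following is a minimal sketch of an OpenStackBaremetalSet CR. The count, image URL, SSH secret name, interface name, and network names are illustrative assumptions; verify the exact fields against the schema with the commands above:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  count: 1                                   # number of Compute nodes to provision
  baseImageUrl: http://<image_server>/rhel-guest-image.qcow2 # RHEL image to deploy (illustrative)
  deploymentSSHSecret: osp-controlplane-ssh-keys # secret that holds the deployment SSH keys (illustrative)
  ctlplaneInterface: enp2s0                  # provisioning NIC on the bare-metal nodes (illustrative)
  networks:                                  # networks defined in your OpenStackNetConfig CR
    - ctlplane
    - internal_api
    - tenant
    - storage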

Prerequisites

  • You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
  • You have created a control plane with the OpenStackControlPlane CRD.

Procedure

  1. Create your Compute nodes by using the OpenStackBaremetalSet CRD. For more information, see Creating Compute nodes with the OpenStackBaremetalSet CRD.
  2. Create the Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
  3. Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
  4. Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.