Chapter 3. Integrating with an existing Red Hat Ceph Storage cluster


Use the procedures and information in this section to integrate Red Hat OpenStack Platform (RHOSP) with an existing Red Hat Ceph Storage cluster. You can create custom environment files to override and provide values for configuration options within OpenStack components.

3.1. Creating a custom environment file

Director supplies parameters to tripleo-ansible to integrate with an external Red Hat Ceph Storage cluster through the environment file:

  • /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml

If you deploy the Shared File Systems service (manila) with external CephFS, separate environment files supply additional parameters.

  • For native CephFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.
  • For CephFS-NFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml.

To configure integration of an existing Ceph Storage cluster with the overcloud, you must supply the details of your Ceph Storage cluster to director by using a custom environment file. Director invokes these environment files during deployment.

Procedure

  1. Create a custom environment file:

    /home/stack/templates/ceph-config.yaml

  2. Add a parameter_defaults: section to the file:

    parameter_defaults:
  3. Use parameter_defaults to set all of the parameters that you want to override in /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml. You must set the following parameters at a minimum:

    • CephClientKey: The Ceph client key for the client.openstack user in your Ceph Storage cluster. This is the value of key that you retrieved in Configuring the existing Ceph Storage cluster. For example, AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==.
    • CephClusterFSID: The file system ID of your Ceph Storage cluster. This is the value of fsid in your Ceph Storage cluster configuration file, which you retrieved in Configuring the existing Ceph Storage cluster. For example, 4b5c8c0a-ff60-454b-a1b4-9747aa737d19.
    • CephExternalMonHost: A comma-delimited list of the IPs of all MON hosts in your Ceph Storage cluster, for example, 172.16.1.7, 172.16.1.8.

      For example:

      parameter_defaults:
        CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
        CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
        CephExternalMonHost: 172.16.1.7, 172.16.1.8, 172.16.1.9
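      Before you run the deployment, you can sanity-check the format of these three values. The following is a hypothetical helper, not part of director; the function name and the specific checks are assumptions for illustration:

      ```python
      import base64
      import binascii
      import ipaddress
      import uuid


      def validate_ceph_params(params: dict) -> list:
          """Check the format of the minimum external-Ceph parameters.

          Hypothetical pre-deployment sanity check, not part of director:
          - CephClusterFSID must be a UUID
          - CephClientKey must be valid base64 (cephx keys are base64-encoded)
          - CephExternalMonHost must be a comma-delimited list of IP addresses
          """
          errors = []
          try:
              uuid.UUID(params.get("CephClusterFSID", ""))
          except ValueError:
              errors.append("CephClusterFSID is not a valid UUID")
          try:
              base64.b64decode(params.get("CephClientKey", ""), validate=True)
          except binascii.Error:
              errors.append("CephClientKey is not valid base64")
          for host in params.get("CephExternalMonHost", "").split(","):
              try:
                  ipaddress.ip_address(host.strip())
              except ValueError:
                  errors.append("CephExternalMonHost entry %r is not an IP address" % host.strip())
          return errors
      ```

      Called with the example values above, the function returns an empty list; a malformed FSID, key, or MON address adds a corresponding error message.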
  4. Optional: You can override the Red Hat OpenStack Platform (RHOSP) client username and the following default pool names to match your Ceph Storage cluster:

    • CephClientUserName: openstack
    • NovaRbdPoolName: vms
    • CinderRbdPoolName: volumes
    • GlanceRbdPoolName: images
    • CinderBackupRbdPoolName: backups
  5. Optional: If you are deploying the Shared File Systems service with CephFS, you can override the following default data and metadata pool names:

      ManilaCephFSDataPoolName: manila_data
      ManilaCephFSMetadataPoolName: manila_metadata
    Note

    Ensure that these names match the names of the pools you created.

  6. Set the client key that you created for the Shared File Systems service. You can override the default Ceph client username for that key:

      ManilaCephFSCephFSAuthId: manila
      CephManilaClientKey: AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==
    Note

    The default value of ManilaCephFSCephFSAuthId is manila, unless you override it. CephManilaClientKey is always required.

After you create the custom environment file, you must include it when you deploy the overcloud.

3.2. Ceph containers for Red Hat OpenStack Platform with Red Hat Ceph Storage

You must have a Ceph Storage container to configure Red Hat OpenStack Platform (RHOSP) to use Red Hat Ceph Storage with NFS Ganesha. You do not require a Ceph Storage container if the external Ceph Storage cluster only provides Block (through RBD), Object (through RGW), or File (through native CephFS) storage.

RHOSP 17.1 deploys Red Hat Ceph Storage 6.x (Ceph package 17.x). The Ceph Storage 6.x containers are hosted at registry.redhat.io, a registry that requires authentication. For more information, see Container image preparation parameters.
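Because registry.redhat.io requires authentication, your container image preparation file must include registry credentials. A minimal sketch, assuming you manage these parameters in an existing containers-prepare-parameter.yaml file; replace the placeholder username and password with your own:

    parameter_defaults:
      ContainerImageRegistryLogin: true
      ContainerImageRegistryCredentials:
        registry.redhat.io:
          <registry_username>: <registry_password>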

3.3. Deploying the overcloud

Deploy the overcloud with the custom environment file that you created in Creating a custom environment file.

Important

When you use Red Hat OpenStack Platform (RHOSP) with Red Hat Ceph Storage, it is important to understand the impact of changing Ceph Monitor IP addresses. The Ceph Monitor service is typically assigned IP addresses from the Storage network, and these addresses are associated with the VM instances that use Red Hat Ceph Storage. The addresses are not updated dynamically if a Ceph Monitor IP address changes because of a hardware replacement. This can result in a storage outage, especially if multiple Ceph Monitor nodes are replaced. To resolve the IP address change and the resulting outage, you must migrate, reboot, or shelve and unshelve each affected VM instance.

To avoid this situation, reuse the IP addresses of the removed Ceph Monitor service instances instead of assigning new IP addresses.

Procedure

  • Deploy the overcloud with the openstack overcloud deploy command and the following additional arguments:

    Example:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
      -e /home/stack/templates/ceph-config.yaml \
      --ntp-server pool.ntp.org \
      ...

    This example command uses the following options:

    • --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/.
    • -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml - Configures director to integrate an existing Ceph Storage cluster with the overcloud.
    • -e /home/stack/templates/ceph-config.yaml - Adds a custom environment file to override the defaults set by -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml.
    • --ntp-server pool.ntp.org - Sets the NTP server.

If you deploy an overcloud that uses the Shared File Systems service (manila) with CephFS, you must add additional environment files.

Procedure

  1. Create and add additional environment files:

    • If you deploy an overcloud that uses the native CephFS back-end driver, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.
    • If you deploy an overcloud that uses CephFS-NFS, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml.

      Red Hat recommends that you deploy the CephFS-NFS driver with an isolated StorageNFS network where shares are exported. You must deploy the isolated network to overcloud Controller nodes. For information about creating the StorageNFS network and updating the roles, see Composable networks in Customizing your Red Hat OpenStack Platform deployment.

  2. Modify the openstack overcloud deploy command depending on the CephFS back end that you use.

    • For native CephFS:

       $ openstack overcloud deploy --templates \
         -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
         -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml \
         -e /home/stack/templates/ceph-config.yaml \
         --ntp-server pool.ntp.org
         ...
    • For CephFS-NFS:

       $ openstack overcloud deploy --templates \
           -r /home/stack/custom_roles.yaml \
           -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
           -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml \
           -e /home/stack/templates/ceph-config.yaml \
            -e /home/stack/templates/overcloud-networks-deployed.yaml \
           -e /home/stack/templates/overcloud-vip-deployed.yaml \
            --ntp-server pool.ntp.org
           ...
      Note

      The custom ceph-config.yaml environment file overrides parameters in the external-ceph.yaml file and either the manila-cephfsnative-config.yaml file or the manila-cephfsganesha-config.yaml file. Therefore, include the custom ceph-config.yaml environment file in the deployment command after external-ceph.yaml and either manila-cephfsnative-config.yaml or manila-cephfsganesha-config.yaml.

      Example environment file

      parameter_defaults:
          CinderEnableIscsiBackend: false
          CinderEnableRbdBackend: true
          CinderEnableNfsBackend: false
          NovaEnableRbdBackend: true
          GlanceBackend: rbd
          CinderRbdPoolName: "volumes"
          NovaRbdPoolName: "vms"
          GlanceRbdPoolName: "images"
          CinderBackupRbdPoolName: "backups"
          CephClusterFSID: <cluster_ID>
          CephExternalMonHost: <IP_address>,<IP_address>,<IP_address>
          CephClientKey: "<client_key>"
          CephClientUserName: "openstack"
          ManilaCephFSDataPoolName: manila_data
          ManilaCephFSMetadataPoolName: manila_metadata
          ManilaCephFSCephFSAuthId: 'manila'
          CephManilaClientKey: '<client_key>'
          ExtraConfig:

      • Replace <cluster_ID>, <IP_address>, and <client_key> with values that are suitable for your environment.

3.4. Adding an external Ceph Object Gateway (RGW) for Object storage

If you deploy an overcloud that uses an external Red Hat Ceph Storage Object Gateway (RGW) for Object storage, you must add an additional environment file to connect to the RGW instance.

Procedure

  1. Create a new custom environment file to define the RGW connection.
  2. Add the following parameters to the custom environment file:

    parameter_defaults:
       ExternalSwiftPublicUrl: <public_rgw_endpoint_url>
       ExternalSwiftInternalUrl: <internal_rgw_endpoint_url>
       ExternalSwiftAdminUrl: <admin_rgw_endpoint_url>
       ExternalSwiftUserTenant: 'service'
       SwiftPassword: <swift_password>
    • Replace <public_rgw_endpoint_url> with an HTTP formatted URL that represents the public endpoint where the external RGW instance is listening for connections. By default, the external RGW instance listens on port 8080. Confirm your deployment uses this port.
    • Replace <internal_rgw_endpoint_url> with an HTTP formatted URL that represents the internal endpoint where the external RGW instance is listening for connections. By default, the external RGW instance listens on port 8080. Confirm your deployment uses this port.
    • Replace <admin_rgw_endpoint_url> with an HTTP formatted URL that represents the administrative endpoint where the external RGW instance is listening for connections. By default, the external RGW instance listens on port 8080. Confirm your deployment uses this port.
    • Replace <swift_password> with the Object Storage service (swift) password.

      Note

      The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service, by using the rgw_keystone_admin_password setting.

  3. Save the new custom environment file.
  4. Add the following parameters to the Red Hat Ceph Storage configuration file to configure RGW to use the Identity service:

        rgw_keystone_api_version = 3
        rgw_keystone_url = <public_keystone_endpoint_url>
        rgw_keystone_accepted_roles = member, Member, admin
        rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
        rgw_keystone_accepted_reader_roles = SwiftSystemReader
        rgw_keystone_admin_domain = default
        rgw_keystone_admin_project = service
        rgw_keystone_admin_user = swift
        rgw_keystone_admin_password = <swift_admin_password>
        rgw_keystone_implicit_tenants = true
        rgw_keystone_verify_ssl = false
        rgw_keystone_revocation_interval = 0
        rgw_s3_auth_use_keystone = true
        rgw_swift_versioning_enabled = true
        rgw_swift_account_in_url = true
        rgw_swift_enforce_content_length = true
        rgw_trust_forwarded_https = true
        rgw_max_attr_name_len = 128
        rgw_max_attrs_num_in_req = 90
        rgw_max_attr_size = 1024
    • Replace <public_keystone_endpoint_url> with an HTTP formatted URL that represents the public endpoint where the Identity service is listening for connections. By default, the Identity service listens on port 5000. Confirm your deployment uses this port.
    • Replace <swift_admin_password> with the Object Storage service password defined in your custom environment file.

      Note

      Director creates the following roles and users in the Identity service by default:

      • rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
      • rgw_keystone_admin_domain: default
      • rgw_keystone_admin_project: service
      • rgw_keystone_admin_user: swift
  5. Save the changes to the Red Hat Ceph Storage configuration file.
  6. Update your deployment with the custom environment file:

    $ openstack overcloud deploy --templates \
    -e <existing_overcloud_environment_files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e <external_rgw_environment_file>
    • Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment.
    • Replace <external_rgw_environment_file> with the name of the custom environment file created during this procedure.
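
Because the password must be identical in the custom environment file and the Red Hat Ceph Storage configuration file, you can cross-check the two files before updating the deployment. The following is a hypothetical sketch; the function name and the simple line-based parsing are assumptions, not part of director:

    ```python
    import configparser


    def passwords_match(env_file_text: str, ceph_conf_text: str) -> bool:
        """Hypothetical pre-check: the SwiftPassword value in the custom heat
        environment file must equal rgw_keystone_admin_password in the Red Hat
        Ceph Storage configuration file."""
        # Minimal line-based scan for 'SwiftPassword: <value>' (assumes flat YAML).
        swift_password = None
        for line in env_file_text.splitlines():
            stripped = line.strip()
            if stripped.startswith("SwiftPassword:"):
                swift_password = stripped.split(":", 1)[1].strip()
        # ceph.conf uses INI syntax; wrap a bare fragment in a section header.
        if not ceph_conf_text.lstrip().startswith("["):
            ceph_conf_text = "[client.rgw]\n" + ceph_conf_text
        parser = configparser.ConfigParser()
        parser.read_string(ceph_conf_text)
        rgw_password = None
        for section in parser.sections():
            rgw_password = parser[section].get("rgw_keystone_admin_password", rgw_password)
        return swift_password is not None and swift_password == rgw_password
    ```

    The function returns True only when both files define the same password, which is the condition the Note in the previous procedure describes.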