Chapter 5. Configuring Red Hat OpenStack Platform director Operator for Service Telemetry Framework


To collect metrics, events, or both, and to send them to the Service Telemetry Framework (STF) storage domain, you must configure the Red Hat OpenStack Platform (RHOSP) overcloud to enable data collection and transport.

STF can support both single and multiple clouds. The default configurations in RHOSP and STF are set up for a single-cloud installation.

When you deploy the Red Hat OpenStack Platform (RHOSP) overcloud using director Operator, you must configure the data collectors and the data transport for Service Telemetry Framework (STF).

Prerequisites

  • You are familiar with deploying and managing RHOSP with the RHOSP director Operator.

When you configure the Red Hat OpenStack Platform (RHOSP) overcloud for Service Telemetry Framework (STF), you must provide the AMQ Interconnect route address in the STF connection file.

Procedure

  1. Log in to your Red Hat OpenShift Container Platform environment where STF is hosted.
  2. Change to the service-telemetry project:

    $ oc project service-telemetry
  3. Retrieve the AMQ Interconnect route address:

    $ oc get routes -ogo-template='{{ range .items }}{{printf "%s\n" .spec.host }}{{ end }}' | grep "\-5671"
    default-interconnect-5671-service-telemetry.apps.infra.watch
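
In addition to the route address, the stf-connectors.yaml file that you create later in this chapter requires the AMQ Interconnect password. The following command is a minimal sketch of one way to retrieve it, assuming a default STF deployment that stores the password in the default-interconnect-users secret under the guest-password key, and that jq is available; verify the secret and key names in your environment:

    $ oc get secret default-interconnect-users -o json | jq -r .data.guest-password | base64 -d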

Edit the heat-env-config-deploy ConfigMap to add the base Service Telemetry Framework (STF) configuration to the overcloud nodes.

Procedure

  1. Log in to the Red Hat OpenShift Container Platform environment where RHOSP director Operator is deployed and change to the project that hosts your RHOSP deployment:

    $ oc project openstack
  2. Open the heat-env-config-deploy ConfigMap for editing:

    $ oc edit configmap heat-env-config-deploy
  3. Add the enable-stf.yaml configuration to the heat-env-config-deploy ConfigMap, save your edits, and close the file:

    enable-stf.yaml

    apiVersion: v1
    data:
      [...]
      enable-stf.yaml: |
        parameter_defaults:
            # only send to STF, not other publishers
            PipelinePublishers: []
    
            # manage the polling and pipeline configuration files for Ceilometer agents
            ManagePolling: true
            ManagePipeline: true
            ManageEventPipeline: false
    
            # enable Ceilometer metrics
            CeilometerQdrPublishMetrics: true
    
            # enable collection of API status
            CollectdEnableSensubility: true
            CollectdSensubilityTransport: amqp1
    
            # enable collection of containerized service metrics
            CollectdEnableLibpodstats: true
    
            # set collectd overrides for higher telemetry resolution and extra plugins
            # to load
            CollectdConnectionType: amqp1
            CollectdAmqpInterval: 30
            CollectdDefaultPollingInterval: 30
            # to collect information about the virtual memory subsystem of the kernel
            # CollectdExtraPlugins:
            # - vmem
    
            # set standard prefixes for where metrics are published to QDR
            MetricsQdrAddresses:
            - prefix: 'collectd'
              distribution: multicast
            - prefix: 'anycast/ceilometer'
              distribution: multicast
    
            ExtraConfig:
               ceilometer::agent::polling::polling_interval: 30
               ceilometer::agent::polling::polling_meters:
               - cpu
               - memory.usage
    
               # to avoid filling the memory buffers if disconnected from the message bus
               # note: this may need an adjustment if there are many metrics to be sent.
               collectd::plugin::amqp1::send_queue_limit: 5000
    
               # to receive extra information about virtual memory, you must enable vmem plugin in CollectdExtraPlugins
               # collectd::plugin::vmem::verbose: true
    
               # provide name and uuid in addition to hostname for better correlation
               # to ceilometer data
               collectd::plugin::virt::hostname_format: "name uuid hostname"
    
               # to capture all extra_stats metrics, comment out below config
               collectd::plugin::virt::extra_stats: cpu_util vcpu disk
    
               # provide the human-friendly name of the virtual instance
               collectd::plugin::virt::plugin_instance_format: metadata
    
               # set memcached collectd plugin to report its metrics by hostname
               # rather than host IP, ensuring metrics in the dashboard remain uniform
               collectd::plugin::memcached::instances:
                 local:
                   host: "%{hiera('fqdn_canonical')}"
                   port: 11211
    
               # report root filesystem storage metrics
               collectd::plugin::df::ignoreselected: false
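
Optionally, verify that the ConfigMap now contains the enable-stf.yaml key before you continue. A minimal check, assuming the ConfigMap name shown above:

    $ oc get configmap heat-env-config-deploy -o jsonpath='{.data.enable-stf\.yaml}'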

Edit the heat-env-config-deploy ConfigMap to create a connection from Red Hat OpenStack Platform (RHOSP) to Service Telemetry Framework.

Procedure

  1. Log in to the Red Hat OpenShift Container Platform environment where RHOSP director Operator is deployed and change to the project that hosts your RHOSP deployment:

    $ oc project openstack
  2. Open the heat-env-config-deploy ConfigMap for editing:

    $ oc edit configmap heat-env-config-deploy
  3. Add your stf-connectors.yaml configuration, appropriate to your environment, to the heat-env-config-deploy ConfigMap. Save your edits, and close the file:

    stf-connectors.yaml

    apiVersion: v1
    data:
      [...]
      stf-connectors.yaml: |
        resource_registry:
          OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml
    
        parameter_defaults:
            MetricsQdrConnectors:
                - host: default-interconnect-5671-service-telemetry.apps.ostest.test.metalkube.org
                  port: 443
                  role: edge
                  verifyHostname: false
                  sslProfile: sslProfile
                  saslUsername: guest@default-interconnect
                  saslPassword: <password_from_stf>
    
            MetricsQdrSSLProfiles:
                - name: sslProfile
    
            CeilometerQdrMetricsConfig:
                driver: amqp
                topic: cloud1-metering
    
            CollectdAmqpInstances:
                cloud1-telemetry:
                    format: JSON
                    presettle: false
    
            CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry

    • The resource_registry configuration directly loads the collectd service because you do not include the collectd-write-qdr.yaml environment file for multiple cloud deployments.
    • Replace the host sub-parameter of MetricsQdrConnectors with the value that you retrieved in Section 4.1.2, “Retrieving the AMQ Interconnect route address”.
    • Replace the <password_from_stf> portion of the saslPassword sub-parameter of MetricsQdrConnectors with the value you retrieved in Section 4.1.1, “Retrieving the AMQ Interconnect password”.
    • Set the value of CeilometerQdrMetricsConfig.topic to define the topic for Ceilometer metrics. The value is a unique topic identifier for the cloud, such as cloud1-metering.
    • Set the CollectdAmqpInstances sub-parameter to define the topic for collectd metrics. The section name is a unique topic identifier for the cloud, such as cloud1-telemetry.
    • Set CollectdSensubilityResultsChannel to define the topic for collectd-sensubility events. The value is a unique topic identifier for the cloud, such as sensubility/cloud1-telemetry.
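
If you attach more than one cloud to the same STF instance, give each cloud its own set of topic identifiers so that STF can distinguish the data streams. The following fragment is a sketch for a hypothetical second cloud named cloud2; the MetricsQdrConnectors and MetricsQdrSSLProfiles settings remain the same as for the first cloud:

    stf-connectors.yaml (hypothetical second cloud, fragment)

        parameter_defaults:
            CeilometerQdrMetricsConfig:
                driver: amqp
                topic: cloud2-metering

            CollectdAmqpInstances:
                cloud2-telemetry:
                    format: JSON
                    presettle: false

            CollectdSensubilityResultsChannel: sensubility/cloud2-telemetry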

Deploy or update the overcloud with the required environment files so that data is collected and transmitted to Service Telemetry Framework (STF).

Procedure

  1. Log in to the Red Hat OpenShift Container Platform environment where RHOSP director Operator is deployed and change to the project that hosts your RHOSP deployment:

    $ oc project openstack
  2. Open the OpenStackConfigGenerator custom resource for editing:

    $ oc edit OpenStackConfigGenerator
  3. Add the metrics/ceilometer-write-qdr.yaml and metrics/qdr-edge-only.yaml environment files as values for the heatEnvs parameter. Save your edits, and close the OpenStackConfigGenerator custom resource:

    Note

    If you already deployed a Red Hat OpenStack Platform environment using director Operator, you must delete the existing OpenStackConfigGenerator and create a new object with the full configuration in order to re-generate the OpenStackConfigVersion.

    OpenStackConfigGenerator

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackConfigGenerator
    metadata:
      name: default
      namespace: openstack
    spec:
      heatEnvConfigMap: heat-env-config-deploy
      heatEnvs:
      - <existing_environment_file_references>
      - metrics/ceilometer-write-qdr.yaml
      - metrics/qdr-edge-only.yaml

  4. If you already deployed a Red Hat OpenStack Platform environment using director Operator and generated a new OpenStackConfigVersion, edit the OpenStackDeploy object of your deployment and set the value of spec.configVersion to the new OpenStackConfigVersion to update the overcloud deployment.
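
    The following commands are a sketch of this update workflow. They assume that the OpenStackConfigGenerator and OpenStackDeploy objects are both named default and that the new generator definition is stored in a local file named openstackconfiggenerator.yaml; these are example names, so substitute the names used in your deployment:

    # Re-create the config generator so that a new OpenStackConfigVersion is generated
    $ oc delete openstackconfiggenerator default -n openstack
    $ oc apply -f openstackconfiggenerator.yaml

    # List the generated config versions and note the newest one
    $ oc get openstackconfigversion -n openstack

    # Set spec.configVersion of the deployment to the new version
    $ oc edit openstackdeploy default -n openstack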