Red Hat build of OpenTelemetry


OpenShift Container Platform 4.17

Configuring and using the Red Hat build of OpenTelemetry in OpenShift Container Platform

Red Hat OpenShift Documentation Team

Abstract

Use the Red Hat build of the open source OpenTelemetry project to collect unified, standardized, and vendor-neutral telemetry data for cloud-native software in OpenShift Container Platform.

Chapter 1. Release notes for the Red Hat build of OpenTelemetry

1.1. Red Hat build of OpenTelemetry overview

Red Hat build of OpenTelemetry is based on the open source OpenTelemetry project, which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. The Red Hat build of OpenTelemetry provides support for deploying and managing the OpenTelemetry Collector and simplifies workload instrumentation.

The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs.

The OpenTelemetry Collector has a number of features including the following:

Data Collection and Processing Hub
It acts as a central component that gathers telemetry data like metrics and traces from various sources. This data can be created from instrumented applications and infrastructure.
Customizable telemetry data pipeline
The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers.
Auto-instrumentation features
Automatic instrumentation simplifies the process of adding observability to applications. Developers do not need to manually instrument their code for basic telemetry data.

Here are some of the use cases for the OpenTelemetry Collector:

Centralized data collection
In a microservices architecture, the Collector can be deployed to aggregate data from multiple services.
Data enrichment and processing
Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data.
Multi-backend receiving and exporting
The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously.
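
The following minimal OpenTelemetryCollector configuration sketch illustrates how receivers, processors, and exporters are combined into a pipeline. It is provided for illustration only and uses the OTLP Receiver, Batch Processor, and Debug Exporter as placeholder components; see the "Configuring the Collector" chapter for the supported options.

  config: |
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]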

You can use the Red Hat build of OpenTelemetry in combination with the Red Hat OpenShift distributed tracing platform (Tempo).

Note

Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat Support.

1.2. Release notes for Red Hat build of OpenTelemetry 3.4

The Red Hat build of OpenTelemetry 3.4 is provided through the Red Hat build of OpenTelemetry Operator 0.113.0.

The Red Hat build of OpenTelemetry 3.4 is based on the open source OpenTelemetry release 0.113.0.

1.2.1. Technology Preview features

This update introduces the following Technology Preview features:

  • OpenTelemetry Protocol (OTLP) JSON File Receiver
  • Count Connector
Important

Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

1.2.2. New features and enhancements

This update introduces the following enhancements:

  • The following Technology Preview features reach General Availability:

    • BearerTokenAuth Extension
    • Kubernetes Attributes Processor
    • Spanmetrics Connector
  • You can use the instrumentation.opentelemetry.io/inject-sdk annotation with the Instrumentation custom resource to enable injection of the OpenTelemetry SDK environment variables into multi-container pods.
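
    A minimal sketch of how this annotation might be applied to a workload follows; the Deployment name and container names are hypothetical, and an Instrumentation custom resource is assumed to already exist in the same namespace.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app # hypothetical workload name
      spec:
        template:
          metadata:
            annotations:
              instrumentation.opentelemetry.io/inject-sdk: "true"
              # hypothetical container names of a multi-container pod
              instrumentation.opentelemetry.io/container-names: "app,sidecar"
      # ...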

1.2.3. Removal notice

  • In the Red Hat build of OpenTelemetry 3.4, the Logging Exporter has been removed from the Collector. Use the Debug Exporter instead, as shown in the configuration sketch after this list.

    Warning

    If you have the Logging Exporter configured, upgrading to the Red Hat build of OpenTelemetry 3.4 will cause crash loops. To avoid such issues, you must configure the Red Hat build of OpenTelemetry to use the Debug Exporter instead of the Logging Exporter before upgrading to the Red Hat build of OpenTelemetry 3.4.

  • In the Red Hat build of OpenTelemetry 3.4, the Technology Preview Memory Ballast Extension has been removed. You can use the GOMEMLIMIT environment variable instead.
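
For illustration only, the following sketches show the corresponding change in the exporters section of an OpenTelemetryCollector custom resource; the traces pipeline is an assumed example.

Logging Exporter configuration (before the upgrade)

  config: |
    exporters:
      logging: {}
    service:
      pipelines:
        traces:
          exporters: [logging]

Debug Exporter configuration (Red Hat build of OpenTelemetry 3.4 and later)

  config: |
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          exporters: [debug]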

1.3. Release notes for Red Hat build of OpenTelemetry 3.3.1

The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.

The Red Hat build of OpenTelemetry 3.3.1 is based on the open source OpenTelemetry release 0.107.0.

1.3.1. Bug fixes

This update introduces the following bug fix:

  • Before this update, injection of the NGINX auto-instrumentation failed when copying the instrumentation libraries into the application container. With this update, the copy command is configured correctly, which fixes the issue. (TRACING-4673)

1.4. Release notes for Red Hat build of OpenTelemetry 3.3

The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.

The Red Hat build of OpenTelemetry 3.3 is based on the open source OpenTelemetry release 0.107.0.

1.4.1. CVEs

This release fixes the following CVEs:

1.4.2. Technology Preview features

This update introduces the following Technology Preview features:

  • Group-by-Attributes Processor
  • Transform Processor
  • Routing Connector
  • Prometheus Remote Write Exporter
  • Exporting logs to the LokiStack log store
Important

Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

1.4.3. New features and enhancements

This update introduces the following enhancements:

  • A Collector dashboard for viewing internal Collector metrics and analyzing Collector health and performance. (TRACING-3768)
  • Support for automatically reloading certificates in both the OpenTelemetry Collector and instrumentation. (TRACING-4186)

1.4.4. Bug fixes

This update introduces the following bug fixes:

  • Before this update, the ServiceMonitor object was failing to scrape operator metrics due to missing permissions for accessing the metrics endpoint. With this update, this issue is fixed by creating the ServiceMonitor custom resource when operator monitoring is enabled. (TRACING-4288)
  • Before this update, the Collector service and the headless service were both monitoring the same endpoints, which caused duplication of metrics collection and ServiceMonitor objects. With this update, this issue is fixed by not creating the headless service. (OBSDA-773)

1.5. Release notes for Red Hat build of OpenTelemetry 3.2.2

The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.

1.5.1. CVEs

This release fixes the following CVEs:

1.5.2. Bug fixes

This update introduces the following bug fix:

  • Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the operator ignores this new annotation. (TRACING-4435)

1.6. Release notes for Red Hat build of OpenTelemetry 3.2.1

The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.

1.6.1. CVEs

This release fixes the following CVEs:

1.6.2. New features and enhancements

This update introduces the following enhancement:

  • Red Hat build of OpenTelemetry 3.2.1 is based on the open source OpenTelemetry release 0.102.1.

1.7. Release notes for Red Hat build of OpenTelemetry 3.2

The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.

1.7.1. Technology Preview features

This update introduces the following Technology Preview features:

  • Host Metrics Receiver
  • OIDC Auth Extension
  • Kubernetes Cluster Receiver
  • Kubernetes Events Receiver
  • Kubernetes Objects Receiver
  • Load-Balancing Exporter
  • Kubelet Stats Receiver
  • Cumulative to Delta Processor
  • Forward Connector
  • Journald Receiver
  • Filelog Receiver
  • File Storage Extension
Important

Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

1.7.2. New features and enhancements

This update introduces the following enhancement:

  • Red Hat build of OpenTelemetry 3.2 is based on the open source OpenTelemetry release 0.100.0.

1.7.3. Deprecated functionality

In Red Hat build of OpenTelemetry 3.2, use of empty values and null keywords in the OpenTelemetry Collector custom resource is deprecated and planned to be unsupported in a future release. Red Hat will provide bug fixes and support for this syntax during the current release lifecycle, but this syntax will become unsupported. As an alternative to empty values and null keywords, update the OpenTelemetry Collector custom resource to contain empty JSON objects ({}) instead.
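
For example, a receiver entry that previously relied on an empty value can be written with an explicit empty JSON object. The zipkin receiver is used here only as an illustrative sketch.

Deprecated syntax

  config: |
    receivers:
      zipkin:

Preferred syntax

  config: |
    receivers:
      zipkin: {}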

1.7.4. Bug fixes

This update introduces the following bug fix:

  • Before this update, the checkbox to enable Operator monitoring was not available in the web console when installing the Red Hat build of OpenTelemetry Operator. As a result, a ServiceMonitor resource was not created in the openshift-opentelemetry-operator namespace. With this update, the checkbox appears for the Red Hat build of OpenTelemetry Operator in the web console so that Operator monitoring can be enabled during installation. (TRACING-3761)

1.8. Release notes for Red Hat build of OpenTelemetry 3.1.1

The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.

1.8.1. CVEs

This release fixes CVE-2023-39326.

1.9. Release notes for Red Hat build of OpenTelemetry 3.1

The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.

1.9.1. Technology Preview features

This update introduces the following Technology Preview feature:

  • The target allocator is an optional component of the OpenTelemetry Operator that shards Prometheus receiver scrape targets across the deployed fleet of OpenTelemetry Collector instances. The target allocator provides integration with the Prometheus PodMonitor and ServiceMonitor custom resources.
Important

The target allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
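
A minimal sketch of enabling the target allocator in an OpenTelemetryCollector custom resource might look like the following. The field names follow the upstream OpenTelemetry Operator API, and the service account name and the empty scrape configuration are assumptions for illustration only.

  apiVersion: opentelemetry.io/v1alpha1
  kind: OpenTelemetryCollector
  metadata:
    name: otel
  spec:
    mode: statefulset
    targetAllocator:
      enabled: true
      serviceAccount: otel-targetallocator # assumed service account with permissions to read PodMonitor and ServiceMonitor resources
      prometheusCR:
        enabled: true
    config: |
      receivers:
        prometheus:
          config:
            scrape_configs: []
      exporters:
        debug: {}
      service:
        pipelines:
          metrics:
            receivers: [prometheus]
            exporters: [debug]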

1.9.2. New features and enhancements

This update introduces the following enhancement:

  • Red Hat build of OpenTelemetry 3.1 is based on the open source OpenTelemetry release 0.93.0.

1.10. Release notes for Red Hat build of OpenTelemetry 3.0

1.10.1. New features and enhancements

This update introduces the following enhancements:

  • Red Hat build of OpenTelemetry 3.0 is based on the open source OpenTelemetry release 0.89.0.
  • The OpenShift distributed tracing data collection Operator is renamed as the Red Hat build of OpenTelemetry Operator.
  • Support for the ARM architecture.
  • Support for the Prometheus receiver for metrics collection.
  • Support for the Kafka receiver and exporter for sending traces and metrics to Kafka.
  • Support for cluster-wide proxy environments.
  • The Red Hat build of OpenTelemetry Operator creates the Prometheus ServiceMonitor custom resource if the Prometheus exporter is enabled.
  • The Operator enables the Instrumentation custom resource that allows injecting upstream OpenTelemetry auto-instrumentation libraries.

1.10.2. Removal notice

In Red Hat build of OpenTelemetry 3.0, the Jaeger exporter has been removed. Bug fixes and support are provided only through the end of the 2.9 lifecycle. As an alternative to the Jaeger exporter for sending data to the Jaeger collector, you can use the OTLP exporter instead.

1.10.3. Bug fixes

This update introduces the following bug fixes:

  • Fixed support for disconnected environments when using the oc adm catalog mirror CLI command.

1.10.4. Known issues

There is currently a known issue:

  • Currently, the cluster monitoring of the Red Hat build of OpenTelemetry Operator is disabled due to a bug (TRACING-3761). The bug prevents the cluster monitoring from scraping metrics from the Red Hat build of OpenTelemetry Operator because the openshift.io/cluster-monitoring=true label, which is required for the cluster monitoring and the service monitor object, is missing.

    Workaround

    You can enable the cluster monitoring as follows:

    1. Add the following label in the Operator namespace by running the following command:

      $ oc label namespace openshift-opentelemetry-operator openshift.io/cluster-monitoring=true
    2. Create a service monitor, role, and role binding:

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        name: opentelemetry-operator-controller-manager-metrics-service
        namespace: openshift-opentelemetry-operator
      spec:
        endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          path: /metrics
          port: https
          scheme: https
          tlsConfig:
            insecureSkipVerify: true
        selector:
          matchLabels:
            app.kubernetes.io/name: opentelemetry-operator
            control-plane: controller-manager
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: otel-operator-prometheus
        namespace: openshift-opentelemetry-operator
        annotations:
          include.release.openshift.io/self-managed-high-availability: "true"
          include.release.openshift.io/single-node-developer: "true"
      rules:
      - apiGroups:
        - ""
        resources:
        - services
        - endpoints
        - pods
        verbs:
        - get
        - list
        - watch
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: otel-operator-prometheus
        namespace: openshift-opentelemetry-operator
        annotations:
          include.release.openshift.io/self-managed-high-availability: "true"
          include.release.openshift.io/single-node-developer: "true"
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: otel-operator-prometheus
      subjects:
      - kind: ServiceAccount
        name: prometheus-k8s
        namespace: openshift-monitoring

1.11. Release notes for Red Hat build of OpenTelemetry 2.9.2

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.9.2 is based on the open source OpenTelemetry release 0.81.0.

1.11.1. CVEs

1.11.2. Known issues

There is currently a known issue:

1.12. Release notes for Red Hat build of OpenTelemetry 2.9.1

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.9.1 is based on the open source OpenTelemetry release 0.81.0.

1.12.1. CVEs

1.12.2. Known issues

There is currently a known issue:

1.13. Release notes for Red Hat build of OpenTelemetry 2.9

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.9 is based on the open source OpenTelemetry release 0.81.0.

1.13.1. New features and enhancements

This release introduces the following enhancements for the Red Hat build of OpenTelemetry:

  • Support for OTLP metrics ingestion. The metrics can be forwarded and stored in the user workload monitoring stack by using the Prometheus exporter.
  • Support for the Operator maturity Level IV, Deep Insights, which enables upgrading and monitoring of OpenTelemetry Collector instances and the Red Hat build of OpenTelemetry Operator.
  • Reporting of traces and metrics from remote clusters by using OTLP or HTTP and HTTPS.
  • Collection of OpenShift Container Platform resource attributes by using the resourcedetection processor.
  • Support for the managed and unmanaged states in the OpenTelemetryCollector custom resource.

1.13.2. Known issues

There is currently a known issue:

1.14. Release notes for Red Hat build of OpenTelemetry 2.8

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.8 is based on the open source OpenTelemetry release 0.74.0.

1.14.1. Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

1.15. Release notes for Red Hat build of OpenTelemetry 2.7

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.7 is based on the open source OpenTelemetry release 0.63.1.

1.15.1. Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

1.16. Release notes for Red Hat build of OpenTelemetry 2.6

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.6 is based on the open source OpenTelemetry release 0.60.

1.16.1. Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

1.17. Release notes for Red Hat build of OpenTelemetry 2.5

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.5 is based on the open source OpenTelemetry release 0.56.

1.17.1. New features and enhancements

This update introduces the following enhancement:

  • Support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator.

1.17.2. Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

1.18. Release notes for Red Hat build of OpenTelemetry 2.4

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.4 is based on the open source OpenTelemetry release 0.49.

1.18.1. Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

1.19. Release notes for Red Hat build of OpenTelemetry 2.3

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.3.1 is based on the open source OpenTelemetry release 0.44.1.

Red Hat build of OpenTelemetry 2.3.0 is based on the open source OpenTelemetry release 0.44.0.

1.19.1. Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

1.20. Release notes for Red Hat build of OpenTelemetry 2.2

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.2 is based on the open source OpenTelemetry release 0.42.0.

1.20.1. Technology Preview features

The unsupported OpenTelemetry Collector components included in the 2.1 release are removed.

1.20.2. Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

1.21. Release notes for Red Hat build of OpenTelemetry 2.1

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.1 is based on the open source OpenTelemetry release 0.41.1.

1.21.1. Technology Preview features

This release introduces a breaking change to how certificates are configured in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following examples.

CA file configuration for OpenTelemetry version 0.33

spec:
  mode: deployment
  config: |
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"

CA file configuration for OpenTelemetry version 0.41.1

spec:
  mode: deployment
  config: |
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"

1.21.2. Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

1.22. Release notes for Red Hat build of OpenTelemetry 2.0

Important

The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat build of OpenTelemetry 2.0 is based on the open source OpenTelemetry release 0.33.0.

This release adds the Red Hat build of OpenTelemetry as a Technology Preview, which you install by using the Red Hat build of OpenTelemetry Operator. The Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation and includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat OpenShift distributed tracing platform. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor-agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling.

1.23. Getting support

If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal.

From the Customer Portal, you can:

  • Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
  • Submit a support case to Red Hat Support.
  • Access other product documentation.

To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager. Insights provides details about issues and, if available, information on how to solve a problem.

If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.

1.24. Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 2. Installing

Installing the Red Hat build of OpenTelemetry involves the following steps:

  1. Installing the Red Hat build of OpenTelemetry Operator.
  2. Creating a namespace for an OpenTelemetry Collector instance.
  3. Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance.

2.1. Installing the Red Hat build of OpenTelemetry from the web console

You can install the Red Hat build of OpenTelemetry from the Administrator view of the web console.

Prerequisites

  • You are logged in to the web console as a cluster administrator with the cluster-admin role.
  • For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.

Procedure

  1. Install the Red Hat build of OpenTelemetry Operator:

    1. Go to Operators → OperatorHub and search for Red Hat build of OpenTelemetry Operator.
    2. Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat → Install → Install → View Operator.

      Important

      This installs the Operator with the default presets:

      • Update channel → stable
      • Installation mode → All namespaces on the cluster
      • Installed Namespace → openshift-operators
      • Update approval → Automatic
    3. In the Details tab of the installed Operator page, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
  2. Create a project of your choice for the OpenTelemetry Collector instance that you will create in the next step by going to Home → Projects → Create Project.
  3. Create an OpenTelemetry Collector instance.

    1. Go to Operators → Installed Operators.
    2. Select OpenTelemetry Collector → Create OpenTelemetry Collector → YAML view.
    3. In the YAML view, customize the OpenTelemetryCollector custom resource (CR):

      Example OpenTelemetryCollector CR

      apiVersion: opentelemetry.io/v1alpha1
      kind: OpenTelemetryCollector
      metadata:
        name: otel
        namespace: <project_of_opentelemetry_collector_instance>
      spec:
        mode: deployment
        config: |
          receivers: 1
            otlp:
              protocols:
                grpc:
                http:
            jaeger:
              protocols:
                grpc: {}
                thrift_binary: {}
                thrift_compact: {}
                thrift_http: {}
            zipkin: {}
          processors: 2
            batch: {}
            memory_limiter:
              check_interval: 1s
              limit_percentage: 50
              spike_limit_percentage: 30
          exporters: 3
            debug: {}
          service:
            pipelines:
              traces:
                receivers: [otlp,jaeger,zipkin]
                processors: [memory_limiter,batch]
                exporters: [debug]

      1
      For details, see the "Receivers" page.
      2
      For details, see the "Processors" page.
      3
      For details, see the "Exporters" page.
    4. Select Create.

Verification

  1. Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance.
  2. Go to Operators → Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready.
  3. Go to Workloads → Pods to verify that all the component pods of the OpenTelemetry Collector instance are running.

2.2. Installing the Red Hat build of OpenTelemetry by using the CLI

You can install the Red Hat build of OpenTelemetry from the command line.

Prerequisites

  • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

    Tip
    • Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
    • Run oc login:

      $ oc login --username=<your_username>

Procedure

  1. Install the Red Hat build of OpenTelemetry Operator:

    1. Create a project for the Red Hat build of OpenTelemetry Operator by running the following command:

      $ oc apply -f - << EOF
      apiVersion: project.openshift.io/v1
      kind: Project
      metadata:
        labels:
          kubernetes.io/metadata.name: openshift-opentelemetry-operator
          openshift.io/cluster-monitoring: "true"
        name: openshift-opentelemetry-operator
      EOF
    2. Create an Operator group by running the following command:

      $ oc apply -f - << EOF
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-opentelemetry-operator
        namespace: openshift-opentelemetry-operator
      spec:
        upgradeStrategy: Default
      EOF
    3. Create a subscription by running the following command:

      $ oc apply -f - << EOF
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: opentelemetry-product
        namespace: openshift-opentelemetry-operator
      spec:
        channel: stable
        installPlanApproval: Automatic
        name: opentelemetry-product
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
    4. Check the Operator status by running the following command:

      $ oc get csv -n openshift-opentelemetry-operator
  2. Create a project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step:

    • To create a project without metadata, run the following command:

      $ oc new-project <project_of_opentelemetry_collector_instance>
    • To create a project with metadata, run the following command:

      $ oc apply -f - << EOF
      apiVersion: project.openshift.io/v1
      kind: Project
      metadata:
        name: <project_of_opentelemetry_collector_instance>
      EOF
  3. Create an OpenTelemetry Collector instance in the project that you created for it.

    Note

    You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster.

    1. Customize the OpenTelemetryCollector custom resource (CR):

      Example OpenTelemetryCollector CR

      apiVersion: opentelemetry.io/v1alpha1
      kind: OpenTelemetryCollector
      metadata:
        name: otel
        namespace: <project_of_opentelemetry_collector_instance>
      spec:
        mode: deployment
        config: |
          receivers: 1
            otlp:
              protocols:
                grpc:
                http:
            jaeger:
              protocols:
                grpc: {}
                thrift_binary: {}
                thrift_compact: {}
                thrift_http: {}
            zipkin: {}
          processors: 2
            batch: {}
            memory_limiter:
              check_interval: 1s
              limit_percentage: 50
              spike_limit_percentage: 30
          exporters: 3
            debug: {}
          service:
            pipelines:
              traces:
                receivers: [otlp,jaeger,zipkin]
                processors: [memory_limiter,batch]
                exporters: [debug]

      1
      For details, see the "Receivers" page.
      2
      For details, see the "Processors" page.
      3
      For details, see the "Exporters" page.
    2. Apply the customized CR by running the following command:

      $ oc apply -f - << EOF
      <OpenTelemetryCollector_custom_resource>
      EOF

Verification

  1. Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command:

    $ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
  2. Get the OpenTelemetry Collector service by running the following command:

    $ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>

2.3. Using taints and tolerations

To schedule the OpenTelemetry pods on dedicated nodes, see How to deploy the different OpenTelemetry components on infra nodes using nodeSelector and tolerations in OpenShift 4.
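
As an illustrative sketch only, scheduling constraints can also be set directly in the OpenTelemetryCollector custom resource by using the nodeSelector and tolerations fields; the node label and taint key shown here are assumptions that must match your infra node configuration.

  apiVersion: opentelemetry.io/v1alpha1
  kind: OpenTelemetryCollector
  metadata:
    name: otel
    namespace: <project_of_opentelemetry_collector_instance>
  spec:
    mode: deployment
    nodeSelector:
      node-role.kubernetes.io/infra: "" # assumed infra node label
    tolerations:
    - key: node-role.kubernetes.io/infra # assumed taint on the infra nodes
      operator: Exists
      effect: NoSchedule
  # ...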

2.4. Creating the required RBAC resources automatically

Some Collector components require configuring the RBAC resources.

Procedure

  • Add the following permissions to the opentelemetry-operator-controller-manager service account so that the Red Hat build of OpenTelemetry Operator can create the required RBAC resources automatically:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: generate-processors-rbac
    rules:
    - apiGroups:
      - rbac.authorization.k8s.io
      resources:
      - clusterrolebindings
      - clusterroles
      verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: generate-processors-rbac
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: generate-processors-rbac
    subjects:
    - kind: ServiceAccount
      name: opentelemetry-operator-controller-manager
      namespace: openshift-opentelemetry-operator
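
    For example, you can save these manifests to a file and apply them by running a command similar to the following; the file name is only an example:

      $ oc apply -f generate-processors-rbac.yaml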

2.5. Additional resources

Chapter 3. Configuring the Collector

3.1. Receivers

Receivers get data into the Collector. A receiver can be push or pull based. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources.

3.1.1. OTLP Receiver

The OTLP Receiver ingests traces, metrics, and logs by using the OpenTelemetry Protocol (OTLP).

OpenTelemetry Collector custom resource with an enabled OTLP Receiver

# ...
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317 1
            tls: 2
              ca_file: ca.pem
              cert_file: cert.pem
              key_file: key.pem
              client_ca_file: client.pem 3
              reload_interval: 1h 4
          http:
            endpoint: 0.0.0.0:4318 5
            tls: 6

    service:
      pipelines:
        traces:
          receivers: [otlp]
        metrics:
          receivers: [otlp]
# ...

1
The OTLP gRPC endpoint. If omitted, the default 0.0.0.0:4317 is used.
2
The server-side TLS configuration. Defines paths to TLS certificates. If omitted, TLS is disabled.
3
The path to the TLS certificate at which the server verifies a client certificate. This sets the value of ClientCAs and ClientAuth to RequireAndVerifyClientCert in the TLSConfig. For more information, see the Config of the Golang TLS package.
4
Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns, us (or µs), ms, s, m, h.
5
The OTLP HTTP endpoint. The default value is 0.0.0.0:4318.
6
The server-side TLS configuration. For more information, see the grpc protocol configuration section.

3.1.2. Jaeger Receiver

The Jaeger Receiver ingests traces in the Jaeger formats.

OpenTelemetry Collector custom resource with an enabled Jaeger Receiver

# ...
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250 1
          thrift_http:
            endpoint: 0.0.0.0:14268 2
          thrift_compact:
            endpoint: 0.0.0.0:6831 3
          thrift_binary:
            endpoint: 0.0.0.0:6832 4
          tls: 5

    service:
      pipelines:
        traces:
          receivers: [jaeger]
# ...

1
The Jaeger gRPC endpoint. If omitted, the default 0.0.0.0:14250 is used.
2
The Jaeger Thrift HTTP endpoint. If omitted, the default 0.0.0.0:14268 is used.
3
The Jaeger Thrift Compact endpoint. If omitted, the default 0.0.0.0:6831 is used.
4
The Jaeger Thrift Binary endpoint. If omitted, the default 0.0.0.0:6832 is used.
5
The server-side TLS configuration. See the OTLP Receiver configuration section for more details.

3.1.3. Host Metrics Receiver

The Host Metrics Receiver ingests metrics in the OTLP format.

Important

The Host Metrics Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Host Metrics Receiver

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-hostfs-daemonset
  namespace: <namespace>
# ...
---
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: true
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: null
defaultAddCapabilities:
- SYS_ADMIN
fsGroup:
  type: RunAsAny
groups: []
metadata:
  name: otel-hostmetrics
readOnlyRootFilesystem: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:<namespace>:otel-hostfs-daemonset
volumes:
- configMap
- emptyDir
- hostPath
- projected
# ...
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <namespace>
spec:
  serviceAccount: otel-hostfs-daemonset
  mode: daemonset
  volumeMounts:
    - mountPath: /hostfs
      name: host
      readOnly: true
  volumes:
    - hostPath:
        path: /
      name: host
  config: |
    receivers:
      hostmetrics:
        collection_interval: 10s 1
        initial_delay: 1s 2
        root_path: / 3
        scrapers: 4
          cpu:
          memory:
          disk:
    service:
      pipelines:
        metrics:
          receivers: [hostmetrics]
# ...

1
Sets the time interval for host metrics collection. If omitted, the default value is 1m.
2
Sets the initial time delay for host metrics collection. If omitted, the default value is 1s.
3
Configures the root_path so that the Host Metrics Receiver knows where the root filesystem is. If running multiple instances of the Host Metrics Receiver, set the same root_path value for each instance.
4
Lists the enabled host metrics scrapers. Available scrapers are cpu, disk, load, filesystem, memory, network, paging, processes, and process.

3.1.4. Kubernetes Objects Receiver

The Kubernetes Objects Receiver pulls or watches objects to be collected from the Kubernetes API server. This receiver primarily watches Kubernetes events, but it can collect any type of Kubernetes object. Because this receiver gathers telemetry for the cluster as a whole, only one instance of this receiver suffices for collecting all the data.

Important

The Kubernetes Objects Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Kubernetes Objects Receiver

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-k8sobj
  namespace: <namespace>
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-k8sobj
  namespace: <namespace>
rules:
- apiGroups:
  - ""
  resources:
  - events
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "events.k8s.io"
  resources:
  - events
  verbs:
  - watch
  - list
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-k8sobj
subjects:
  - kind: ServiceAccount
    name: otel-k8sobj
    namespace: <namespace>
roleRef:
  kind: ClusterRole
  name: otel-k8sobj
  apiGroup: rbac.authorization.k8s.io
# ...
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-k8s-obj
  namespace: <namespace>
spec:
  serviceAccount: otel-k8sobj
  image: ghcr.io/os-observability/redhat-opentelemetry-collector/redhat-opentelemetry-collector:main
  mode: deployment
  config: |
    receivers:
      k8sobjects:
        auth_type: serviceAccount
        objects:
          - name: pods 1
            mode: pull 2
            interval: 30s 3
            label_selector: 4
            field_selector: 5
            namespaces: [<namespace>,...] 6
          - name: events
            mode: watch
    exporters:
      debug:
    service:
      pipelines:
        logs:
          receivers: [k8sobjects]
          exporters: [debug]
# ...

1
The Resource name that this receiver observes: for example, pods, deployments, or events.
2
The observation mode that this receiver uses: pull or watch.
3
Only applicable to the pull mode. The request interval for pulling an object. If omitted, the default value is 1h.
4
The label selector to define targets.
5
The field selector to filter targets.
6
The list of namespaces to collect events from. If omitted, the default value is all.

3.1.5. Kubelet Stats Receiver

The Kubelet Stats Receiver extracts metrics related to nodes, pods, containers, and volumes from the kubelet’s API server. These metrics are then channeled through the metrics-processing pipeline for additional analysis.

Important

The Kubelet Stats Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Kubelet Stats Receiver

# ...
config: |
  receivers:
    kubeletstats:
      collection_interval: 20s
      auth_type: "serviceAccount"
      endpoint: "https://${env:K8S_NODE_NAME}:10250"
      insecure_skip_verify: true
  service:
    pipelines:
      metrics:
        receivers: [kubeletstats]
env:
  - name: K8S_NODE_NAME 1
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
# ...

1
Sets the K8S_NODE_NAME to authenticate to the API.

The Kubelet Stats Receiver requires additional permissions for the service account used for running the OpenTelemetry Collector.

Permissions required by the service account

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: ['']
    resources: ['nodes/stats']
    verbs: ['get', 'watch', 'list']
  - apiGroups: [""]
    resources: ["nodes/proxy"] 1
    verbs: ["get"]
# ...

1
The permissions required when using the extra_metadata_labels or request_utilization or limit_utilization metrics.
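
The cluster role must also be bound to the service account that runs the Collector pod. The following sketch assumes a service account named otel-collector-deployment in the <namespace> project; adjust both names to match your deployment.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment # assumed service account used by the Collector
  namespace: <namespace>
# ...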

3.1.6. Prometheus Receiver

The Prometheus Receiver scrapes the metrics endpoints.

Important

The Prometheus Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Prometheus Receiver

# ...
  config: |
    receivers:
        prometheus:
          config:
            scrape_configs: 1
              - job_name: 'my-app'  2
                scrape_interval: 5s 3
                static_configs:
                  - targets: ['my-app.example.svc.cluster.local:8888'] 4
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
# ...

1
Scrape configurations in the Prometheus format.
2
The Prometheus job name.
3
The interval for scraping the metrics data. Accepts time units. The default value is 1m.
4
The targets at which the metrics are exposed. This example scrapes the metrics from a my-app application in the example project.

3.1.7. OTLP JSON File Receiver

The OTLP JSON File Receiver extracts pipeline information from files containing data in the ProtoJSON format and conforming to the OpenTelemetry Protocol specification. The receiver watches a specified directory for changes such as created or modified files to process.

Important

The OTLP JSON File Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the enabled OTLP JSON File Receiver

# ...
  config: |
    otlpjsonfile:
      include:
        - "/var/log/*.log" 1
      exclude:
        - "/var/log/test.log" 2
# ...

1
The list of file path glob patterns to watch.
2
The list of file path glob patterns to ignore.
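
The receiver takes effect only when it is referenced in a pipeline. The following sketch is an assumed example that forwards the ingested telemetry to the Debug Exporter through a logs pipeline.

# ...
  config: |
    receivers:
      otlpjsonfile:
        include:
          - "/var/log/*.log"
    exporters:
      debug: {}
    service:
      pipelines:
        logs:
          receivers: [otlpjsonfile]
          exporters: [debug]
# ...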

3.1.8. Zipkin Receiver

The Zipkin Receiver ingests traces in the Zipkin v1 and v2 formats.

OpenTelemetry Collector custom resource with the enabled Zipkin Receiver

# ...
  config: |
    receivers:
      zipkin:
        endpoint: 0.0.0.0:9411 1
        tls: 2
    service:
      pipelines:
        traces:
          receivers: [zipkin]
# ...

1
The Zipkin HTTP endpoint. If omitted, the default 0.0.0.0:9411 is used.
2
The server-side TLS configuration. See the OTLP Receiver configuration section for more details.

3.1.9. Kafka Receiver

The Kafka Receiver receives traces, metrics, and logs from Kafka in the OTLP format.

Important

The Kafka Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the enabled Kafka Receiver

# ...
  config: |
    receivers:
      kafka:
        brokers: ["localhost:9092"] 1
        protocol_version: 2.0.0 2
        topic: otlp_spans 3
        auth:
          plain_text: 4
            username: example
            password: example
          tls: 5
            ca_file: ca.pem
            cert_file: cert.pem
            key_file: key.pem
            insecure: false 6
            server_name_override: kafka.example.corp 7
    service:
      pipelines:
        traces:
          receivers: [kafka]
# ...

1
The list of Kafka brokers. The default is localhost:9092.
2
The Kafka protocol version. For example, 2.0.0. This is a required field.
3
The name of the Kafka topic to read from. The default is otlp_spans.
4
The plain text authentication configuration. If omitted, plain text authentication is disabled.
5
The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
6
Disables verifying the server’s certificate chain and host name. The default is false.
7
ServerName indicates the name of the server requested by the client to support virtual hosting.

3.1.10. Kubernetes Cluster Receiver

The Kubernetes Cluster Receiver gathers cluster metrics and entity events from the Kubernetes API server. It uses the Kubernetes API to receive information about updates. Authentication for this receiver is only supported through service accounts.

Important

The Kubernetes Cluster Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the enabled Kubernetes Cluster Receiver

# ...
  receivers:
    k8s_cluster:
      distribution: openshift
      collection_interval: 10s
  exporters:
    debug:
  service:
    pipelines:
      metrics:
        receivers: [k8s_cluster]
        exporters: [debug]
      logs/entity_events:
        receivers: [k8s_cluster]
        exporters: [debug]
# ...

This receiver requires a configured service account, RBAC rules for the cluster role, and the cluster role binding that binds the RBAC with the service account.

ServiceAccount object

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: otelcontribcol
  name: otelcontribcol
# ...

RBAC rules for the ClusterRole object

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
rules:
- apiGroups:
  - quota.openshift.io
  resources:
  - clusterresourcequotas
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - namespaces/status
  - nodes
  - nodes/spec
  - pods
  - pods/status
  - replicationcontrollers
  - replicationcontrollers/status
  - resourcequotas
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
    - autoscaling
  resources:
    - horizontalpodautoscalers
  verbs:
    - get
    - list
    - watch
# ...

ClusterRoleBinding object

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otelcontribcol
subjects:
- kind: ServiceAccount
  name: otelcontribcol
  namespace: default
# ...

3.1.11. OpenCensus Receiver

The OpenCensus Receiver provides backwards compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format over gRPC or HTTP/JSON.

OpenTelemetry Collector custom resource with the enabled OpenCensus Receiver

# ...
  config: |
    receivers:
      opencensus:
        endpoint: 0.0.0.0:9411 1
        tls: 2
        cors_allowed_origins: 3
          - https://*.<example>.com
    service:
      pipelines:
        traces:
          receivers: [opencensus]
# ...

1
The OpenCensus endpoint. If omitted, the default is 0.0.0.0:55678.
2
The server-side TLS configuration. See the OTLP Receiver configuration section for more details.
3
You can also optionally configure CORS for the HTTP JSON endpoint by specifying a list of allowed origins in this field. Wildcards with * are accepted in the cors_allowed_origins values. To match any origin, enter only *.

3.1.12. Filelog Receiver

The Filelog Receiver tails and parses logs from files.

Important

The Filelog Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the enabled Filelog Receiver that tails a text file

# ...
receivers:
  filelog:
    include: [ /simple.log ] 1
    operators: 2
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.sev
# ...

1
A list of file glob patterns that match the file paths to be read.
2
An array of Operators. Each Operator performs a simple task such as parsing a timestamp or JSON. To process logs into a desired format, chain the Operators together.

3.1.13. Journald Receiver

The Journald Receiver parses journald events from the systemd journal and sends them as logs.

Important

The Journald Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the enabled Journald Receiver

apiVersion: v1
kind: Namespace
metadata:
  name: otel-journald
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false"
    pod-security.kubernetes.io/enforce: "privileged"
    pod-security.kubernetes.io/audit: "privileged"
    pod-security.kubernetes.io/warn: "privileged"
# ...
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: privileged-sa
  namespace: otel-journald
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-journald-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
- kind: ServiceAccount
  name: privileged-sa
  namespace: otel-journald
# ...
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-journald-logs
  namespace: otel-journald
spec:
  mode: daemonset
  serviceAccount: privileged-sa
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
      - CHOWN
      - DAC_OVERRIDE
      - FOWNER
      - FSETID
      - KILL
      - NET_BIND_SERVICE
      - SETGID
      - SETPCAP
      - SETUID
    readOnlyRootFilesystem: true
    seLinuxOptions:
      type: spc_t
    seccompProfile:
      type: RuntimeDefault
  config: |
    receivers:
      journald:
        files: /var/log/journal/*/*
        priority: info 1
        units: 2
          - kubelet
          - crio
          - init.scope
          - dnsmasq
        all: true 3
        retry_on_failure:
          enabled: true 4
          initial_interval: 1s 5
          max_interval: 30s 6
          max_elapsed_time: 5m 7
    processors:
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        logs:
          receivers: [journald]
          exporters: [debug]
  volumeMounts:
  - name: journal-logs
    mountPath: /var/log/journal/
    readOnly: true
  volumes:
  - name: journal-logs
    hostPath:
      path: /var/log/journal
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
# ...

1
Filters output by message priorities or priority ranges. The default value is info.
2
Lists the units to read entries from. If empty, entries are read from all units.
3
Includes very long logs and logs with unprintable characters. The default value is false.
4
If set to true, the receiver pauses reading a file and attempts to resend the current batch of logs when encountering an error from downstream components. The default value is false.
5
The time interval to wait after the first failure before retrying. The default value is 1s. The units are ms, s, m, h.
6
The upper bound for the retry backoff interval. When this value is reached, the time interval between consecutive retry attempts remains constant at this value. The default value is 30s. The supported units are ms, s, m, h.
7
The maximum time interval, including retry attempts, for attempting to send a logs batch to a downstream consumer. When this value is reached, the data are discarded. If the set value is 0, retrying never stops. The default value is 5m. The supported units are ms, s, m, h.

3.1.14. Kubernetes Events Receiver

The Kubernetes Events Receiver collects events from the Kubernetes API server. The collected events are converted into logs.

Important

The Kubernetes Events Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenShift Container Platform permissions required for the Kubernetes Events Receiver

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
  labels:
    app: otel-collector
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - namespaces/status
  - nodes
  - nodes/spec
  - pods
  - pods/status
  - replicationcontrollers
  - replicationcontrollers/status
  - resourcequotas
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
    - autoscaling
  resources:
    - horizontalpodautoscalers
  verbs:
    - get
    - list
    - watch
# ...

OpenTelemetry Collector custom resource with the enabled Kubernetes Event Receiver

# ...
  serviceAccount: otel-collector 1
  config: |
    receivers:
      k8s_events:
        namespaces: [project1, project2] 2
    service:
      pipelines:
        logs:
          receivers: [k8s_events]
# ...

1
The service account of the Collector that has the required ClusterRole otel-collector RBAC.
2
The list of namespaces to collect events from. The default value is empty, which means that all namespaces are collected.

3.1.15. Additional resources

3.2. Processors

Processors process the data after it is received and before it is exported. Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters.
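
For example, in the following sketch, which uses illustrative values only, the Memory Limiter Processor runs before the Batch Processor in both pipelines because processors run in the order in which they are listed:

# ...
  config: |
    processors:
      memory_limiter:
        check_interval: 1s
        limit_mib: 4000
      batch:
        timeout: 5s
    service:
      pipelines:
        traces:
          processors: [memory_limiter, batch] # Processors run in the listed order.
        metrics:
          processors: [memory_limiter, batch]
# ...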

3.2.1. Batch Processor

The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information.

Example of the OpenTelemetry Collector custom resource when using the Batch Processor

# ...
  config: |
    processors:
      batch:
        timeout: 5s
        send_batch_max_size: 10000
    service:
      pipelines:
        traces:
          processors: [batch]
        metrics:
          processors: [batch]
# ...

Table 3.1. Parameters used by the Batch Processor
ParameterDescriptionDefault

timeout

Sends the batch after a specific time duration and irrespective of the batch size.

200ms

send_batch_size

Sends the batch of telemetry data after the specified number of spans or metrics.

8192

send_batch_max_size

The maximum allowable size of the batch. Must be equal or greater than the send_batch_size.

0

metadata_keys

When activated, a batcher instance is created for each unique set of values found in the client.Metadata.

[]

metadata_cardinality_limit

When the metadata_keys are populated, this configuration restricts the number of distinct metadata key-value combinations processed throughout the duration of the process.

1000

3.2.2. Memory Limiter Processor

The Memory Limiter Processor periodically checks the Collector’s memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. The preceding component, which is typically a receiver, is expected to retry sending the same data and might apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run.

Example of the OpenTelemetry Collector custom resource when using the Memory Limiter Processor

# ...
  config: |
    processors:
      memory_limiter:
        check_interval: 1s
        limit_mib: 4000
        spike_limit_mib: 800
    service:
      pipelines:
        traces:
          processors: [memory_limiter]
        metrics:
          processors: [memory_limiter]
# ...

Table 3.2. Parameters used by the Memory Limiter Processor
ParameterDescriptionDefault

check_interval

Time between memory usage measurements. The optimal value is 1s. For spiky traffic patterns, you can decrease the check_interval or increase the spike_limit_mib.

0s

limit_mib

The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value.

0

spike_limit_mib

Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib. To calculate the soft limit, subtract the spike_limit_mib from the limit_mib.

20% of limit_mib

limit_percentage

Same as the limit_mib but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting.

0

spike_limit_percentage

Same as the spike_limit_mib but expressed as a percentage of the total available memory. Intended to be used with the limit_percentage setting.

0

3.2.3. Resource Detection Processor

The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry’s resource semantic conventions. Using the detected information, this processor can add or replace the resource values in telemetry data. This processor supports traces and metrics. You can use this processor with multiple detectors such as the Docker metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector.

Important

The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenShift Container Platform permissions required for the Resource Detection Processor

kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: ["config.openshift.io"]
  resources: ["infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]
# ...

OpenTelemetry Collector using the Resource Detection Processor

# ...
  config: |
    processors:
      resourcedetection:
        detectors: [openshift]
        override: true
    service:
      pipelines:
        traces:
          processors: [resourcedetection]
        metrics:
          processors: [resourcedetection]
# ...

OpenTelemetry Collector using the Resource Detection Processor with an environment variable detector

# ...
  config: |
    processors:
      resourcedetection/env:
        detectors: [env] 1
        timeout: 2s
        override: false
# ...

1
Specifies which detector to use. In this example, the environment detector is specified.

3.2.4. Attributes Processor

The Attributes Processor can modify attributes of a span, log, or metric. You can configure this processor to filter and match input data and include or exclude such data for specific actions.

Important

The Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported:

Insert
Inserts a new attribute into the input data when the specified key does not already exist.
Update
Updates an attribute in the input data if the key already exists.
Upsert
Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists.
Delete
Removes an attribute from the input data.
Hash
Hashes an existing attribute value as SHA1.
Extract
Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span Processor’s to_attributes setting with the existing attribute as the source.
Convert
Converts an existing attribute to a specified type.

OpenTelemetry Collector using the Attributes Processor

# ...
  config: |
    processors:
      attributes/example:
        actions:
          - key: db.table
            action: delete
          - key: redacted_span
            value: true
            action: upsert
          - key: copy_key
            from_attribute: key_original
            action: update
          - key: account_id
            value: 2245
            action: insert
          - key: account_password
            action: delete
          - key: account_email
            action: hash
          - key: http.status_code
            action: convert
            converted_type: int
# ...

3.2.5. Resource Processor

The Resource Processor applies changes to the resource attributes. This processor supports traces, metrics, and logs.

Important

The Resource Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector using the Resource Processor

# ...
  config: |
    processors:
      resource:
        attributes:
        - key: cloud.availability_zone
          value: "zone-1"
          action: upsert
        - key: k8s.cluster.name
          from_attribute: k8s-cluster
          action: insert
        - key: redundant-attribute
          action: delete
# ...

The attributes entries represent the actions that are applied to the resource attributes, such as deleting, inserting, or upserting an attribute.

3.2.6. Span Processor

The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces.

Span renaming requires specifying attributes for the new name by using the from_attributes configuration.

Important

The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector using the Span Processor for renaming a span

# ...
  config: |
    processors:
      span:
        name:
          from_attributes: [<key1>, <key2>, ...] 1
          separator: <value> 2
# ...

1
Defines the keys to form the new span name.
2
An optional separator.

You can use this processor to extract attributes from the span name.

OpenTelemetry Collector using the Span Processor for extracting attributes from a span name

# ...
  config: |
    processors:
      span/to_attributes:
        name:
          to_attributes:
            rules:
              - ^\/api\/v1\/document\/(?P<documentId>.*)\/update$ 1
# ...

1
This rule defines how the extraction is executed. You can define more rules. In this example, if the regular expression matches the span name, a documentId attribute is created. If the input span name is /api/v1/document/12345678/update, the output span name is /api/v1/document/{documentId}/update, and a new "documentId"="12345678" attribute is added to the span.

You can also use this processor to modify the span status.

OpenTelemetry Collector using the Span Processor for status change

# ...
  config: |
    processors:
      span/set_status:
        status:
          code: Error
          description: "<error_description>"
# ...

3.2.7. Kubernetes Attributes Processor

The Kubernetes Attributes Processor enables automatic configuration of span, metric, and log resource attributes by using the Kubernetes metadata. This processor supports traces, metrics, and logs. This processor automatically identifies the Kubernetes resources, extracts the metadata from them, and incorporates this extracted metadata as resource attributes into relevant spans, metrics, and logs. It uses the Kubernetes API to discover all pods operating within a cluster, maintaining records of their IP addresses, pod UIDs, and other relevant metadata.

Minimum OpenShift Container Platform permissions required for the Kubernetes Attributes Processor

kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: ['']
    resources: ['pods', 'namespaces']
    verbs: ['get', 'watch', 'list']
# ...

OpenTelemetry Collector using the Kubernetes Attributes Processor

# ...
  config: |
    processors:
         k8sattributes:
             filter:
                 node_from_env_var: KUBE_NODE_NAME
# ...
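
As a sketch of how the extracted metadata can be configured, the following example extends the previous one with an extract block that lists a few commonly used metadata keys; the selection shown is illustrative and should be adjusted to your needs:

# ...
  config: |
    processors:
      k8sattributes:
        filter:
          node_from_env_var: KUBE_NODE_NAME
        extract:
          metadata: # Kubernetes metadata to add as resource attributes.
            - k8s.namespace.name
            - k8s.pod.name
            - k8s.node.name
# ...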

3.2.8. Filter Processor

The Filter Processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator. This processor supports traces, metrics, and logs.

Important

The Filter Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Filter Processor

# ...
config: |
  processors:
    filter/ottl:
      error_mode: ignore 1
      traces:
        span:
          - 'attributes["container.name"] == "app_container_1"' 2
          - 'resource.attributes["host.name"] == "localhost"' 3
# ...

1
Defines the error mode. When set to ignore, ignores errors returned by conditions. When set to propagate, returns the error up the pipeline. An error causes the payload to be dropped from the Collector.
2
Filters the spans that have the container.name == app_container_1 attribute.
3
Filters the spans that have the host.name == localhost resource attribute.

3.2.9. Routing Processor

The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request or read a resource attribute, and then direct the trace information to relevant exporters according to the read value.

Important

The Routing Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Routing Processor

# ...
config: |
  processors:
    routing:
      from_attribute: X-Tenant 1
      default_exporters: 2
      - jaeger
      table: 3
      - value: acme
        exporters: [jaeger/acme]
  exporters:
    jaeger:
      endpoint: localhost:14250
    jaeger/acme:
      endpoint: localhost:24250
# ...

1
The HTTP header name for the lookup value when performing the route.
2
The default exporter when the attribute value is not present in the table in the next section.
3
The table that defines which values are to be routed to which exporters.

Optionally, you can create an attribute_source configuration, which defines where to look for the attribute that you specify in the from_attribute field. The supported values are context for searching the context including the HTTP headers, and resource for searching the resource attributes.
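
For example, the following sketch, which assumes the same jaeger and jaeger/acme exporters as the previous example, sets attribute_source to context so that the X-Tenant value is read from the request context, including the HTTP headers:

# ...
config: |
  processors:
    routing:
      attribute_source: context # Look up X-Tenant in the request context instead of the resource attributes.
      from_attribute: X-Tenant
      default_exporters:
      - jaeger
      table:
      - value: acme
        exporters: [jaeger/acme]
# ...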

3.2.10. Cumulative-to-Delta Processor

The Cumulative-to-Delta Processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics.

You can filter metrics by using the include: or exclude: fields and specifying the strict or regexp metric name matching.

This processor does not convert non-monotonic sums and exponential histograms.

Important

The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor

# ...
config: |
  processors:
    cumulativetodelta:
      include: 1
        match_type: strict 2
        metrics: 3
        - <metric_1_name>
        - <metric_2_name>
      exclude: 4
        match_type: regexp
        metrics:
        - "<regular_expression_for_metric_names>"
# ...

1
Optional: Configures which metrics to include. When omitted, all metrics, except for those listed in the exclude field, are converted to delta metrics.
2
Defines whether the values provided in the metrics field are treated as exact metric names (strict) or as regular expressions (regexp).
3
Lists the metric names, which are exact matches or matches for regular expressions, of the metrics to be converted to delta metrics. If a metric matches both the include and exclude filters, the exclude filter takes precedence.
4
Optional: Configures which metrics to exclude. When omitted, no metrics are excluded from conversion to delta metrics.

3.2.11. Group-by-Attributes Processor

The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes.

Important

The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example:

# ...
processors:
  groupbyattrs:
    keys: 1
      - <key1> 2
      - <key2>
# ...
1
Specifies attribute keys to group by.
2
If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated.

3.2.12. Transform Processor

The Transform Processor enables modification of telemetry data according to specified rules written in the OpenTelemetry Transformation Language (OTTL). For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL Context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate whether a function is to be executed.

All statements are written in the OTTL. You can configure multiple context statements for different signals, traces, metrics, and logs. The value of the context type specifies which OTTL Context the processor must use when interpreting the associated statements.

Important

The Transform Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Configuration summary

# ...
config: |
  processors:
    transform:
      error_mode: ignore 1
      <trace|metric|log>_statements: 2
        - context: <string> 3
          conditions:  4
            - <string>
            - <string>
          statements: 5
            - <string>
            - <string>
            - <string>
        - context: <string>
          statements:
            - <string>
            - <string>
            - <string>
# ...

1
Optional: See the following table "Values for the optional error_mode field".
2
Indicates a signal to be transformed.
3
See the following table "Values for the context field".
4
Optional: Conditions for performing a transformation.
5
Statements for performing a transformation.

Configuration example

# ...
config: |
  processors:
    transform:
      error_mode: ignore
      trace_statements: 1
        - context: resource
          statements:
            - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"]) 2
            - replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***") 3
            - limit(attributes, 100, [])
            - truncate_all(attributes, 4096)
        - context: span 4
          statements:
            - set(status.code, 1) where attributes["http.path"] == "/health"
            - set(name, attributes["http.route"])
            - replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}")
            - limit(attributes, 100, [])
            - truncate_all(attributes, 4096)
# ...

1
Transforms a trace signal.
2
Keeps keys on the resources.
3
Replaces attributes and replaces string characters in password fields with asterisks.
4
Performs transformations at the span level.
Table 3.3. Values for the context field
Signal StatementValid Contexts

trace_statements

resource, scope, span, spanevent

metric_statements

resource, scope, metric, datapoint

log_statements

resource, scope, log

Table 3.4. Values for the optional error_mode field
ValueDescription

ignore

Ignores and logs errors returned by statements and then continues to the next statement.

silent

Ignores and doesn’t log errors returned by statements and then continues to the next statement.

propagate

Returns errors up the pipeline and drops the payload. Implicit default.

3.2.13. Additional resources

3.3. Exporters

Exporters send data to one or more back ends or destinations. An exporter can be push or pull based. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings.

3.3.1. OTLP Exporter

The OTLP gRPC Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP).

OpenTelemetry Collector custom resource with an enabled OTLP Exporter

# ...
  config: |
    exporters:
      otlp:
        endpoint: tempo-ingester:4317 1
        tls: 2
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
          insecure: false 3
          insecure_skip_verify: false 4
          reload_interval: 1h 5
          server_name_override: <name> 6
        headers: 7
          X-Scope-OrgID: "dev"
    service:
      pipelines:
        traces:
          exporters: [otlp]
        metrics:
          exporters: [otlp]
# ...

1
The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls.
2
The client-side TLS configuration. Defines paths to TLS certificates.
3
Disables client transport security when set to true. The default value is false.
4
Skips verifying the certificate when set to true. The default value is false.
5
Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval accepts a string containing valid units of time such as ns, us (or µs), ms, s, m, h.
6
Overrides the virtual host name of authority such as the authority header field in requests. You can use this for testing.
7
Headers are sent for every request performed during an established connection.

3.3.2. OTLP HTTP Exporter

The OTLP HTTP Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP).

OpenTelemetry Collector custom resource with an enabled OTLP HTTP Exporter

# ...
  config: |
    exporters:
      otlphttp:
        endpoint: http://tempo-ingester:4318 1
        tls: 2
        headers: 3
          X-Scope-OrgID: "dev"
        disable_keep_alives: false 4

    service:
      pipelines:
        traces:
          exporters: [otlphttp]
        metrics:
          exporters: [otlphttp]
# ...

1
The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls.
2
The client side TLS configuration. Defines paths to TLS certificates.
3
Headers are sent in every HTTP request.
4
If true, disables HTTP keep-alives so that each connection to the server is used for a single HTTP request only.

3.3.3. Debug Exporter

The Debug Exporter prints traces and metrics to the standard output.

OpenTelemetry Collector custom resource with an enabled Debug Exporter

# ...
  config: |
    exporters:
      debug:
        verbosity: detailed 1
        sampling_initial: 5 2
        sampling_thereafter: 200 3
        use_internal_logger: true 4
    service:
      pipelines:
        traces:
          exporters: [debug]
        metrics:
          exporters: [debug]
# ...

1
Verbosity of the debug export: detailed, normal, or basic. When set to detailed, pipeline data are verbosely logged. Defaults to normal.
2
Initial number of messages logged per second. The default value is 2 messages per second.
3
Sampling rate after the initial number of messages, which is set in sampling_initial, has been logged. Sampling is disabled by default because the default value is 1. Sampling is enabled with values greater than 1. For more information, see the page for the sampler function in the zapcore package on the Go Project’s website.
4
When set to true, enables output from the Collector’s internal logger for the exporter.

3.3.4. Load Balancing Exporter

The Load Balancing Exporter consistently exports spans, metrics, and logs according to the routing_key configuration.

Important

The Load Balancing Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Load Balancing Exporter

# ...
  config: |
    exporters:
      loadbalancing:
        routing_key: "service" 1
        protocol:
          otlp: 2
            timeout: 1s
        resolver: 3
          static: 4
            hostnames:
            - backend-1:4317
            - backend-2:4317
          dns: 5
            hostname: otelcol-headless.observability.svc.cluster.local
          k8s: 6
            service: lb-svc.kube-public
            ports:
              - 15317
              - 16317
# ...

1
The routing_key: service exports spans for the same service name to the same Collector instance to provide accurate aggregation. The routing_key: traceID exports spans based on their traceID. The implicit default is traceID based routing.
2
The OTLP is the only supported load-balancing protocol. All options of the OTLP exporter are supported.
3
You can configure only one resolver.
4
The static resolver distributes the load across the listed endpoints.
5
You can use the DNS resolver only with a Kubernetes headless service.
6
The Kubernetes resolver is recommended.

3.3.5. Prometheus Exporter

The Prometheus Exporter exports metrics in the Prometheus or OpenMetrics formats.

Important

The Prometheus Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Prometheus Exporter

# ...
  config: |
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889 1
        tls: 2
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
        namespace: prefix 3
        const_labels: 4
          label1: value1
        enable_open_metrics: true 5
        resource_to_telemetry_conversion: 6
          enabled: true
        metric_expiration: 180m 7
        add_metric_suffixes: false 8
    service:
      pipelines:
        metrics:
          exporters: [prometheus]
# ...

1
The network endpoint where the metrics are exposed. The Red Hat build of OpenTelemetry Operator automatically exposes the port specified in the endpoint field to the <instance_name>-collector service.
2
The server-side TLS configuration. Defines paths to TLS certificates.
3
If set, exports metrics under the provided value.
4
Key-value pair labels that are applied for every exported metric.
5
If true, metrics are exported by using the OpenMetrics format. Exemplars are only exported in the OpenMetrics format and only for histogram and monotonic sum metrics such as counter. Disabled by default.
6
If enabled is true, all the resource attributes are converted to metric labels. Disabled by default.
7
Defines how long metrics are exposed without updates. The default is 5m.
8
Adds the metrics types and units suffixes. Must be disabled if the monitor tab in the Jaeger console is enabled. The default is true.
Note

When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true, the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics.
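
The following minimal sketch shows where this field is set in the OpenTelemetryCollector CR; <instance_name> is a placeholder:

# ...
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: <instance_name>
spec:
  observability:
    metrics:
      enableMetrics: true # The Operator creates a ServiceMonitor or PodMonitor CR for this Collector instance.
# ...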

3.3.6. Prometheus Remote Write Exporter

The Prometheus Remote Write Exporter exports metrics to compatible back ends.

Important

The Prometheus Remote Write Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Prometheus Remote Write Exporter

# ...
  config: |
    exporters:
      prometheusremotewrite:
        endpoint: "https://my-prometheus:7900/api/v1/push" 1
        tls: 2
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
        target_info: true 3
        export_created_metric: true 4
        max_batch_size_bytes: 3000000 5
    service:
      pipelines:
        metrics:
          exporters: [prometheusremotewrite]
# ...

1
Endpoint for sending the metrics.
2
Server-side TLS configuration. Defines paths to TLS certificates.
3
When set to true, creates a target_info metric for each resource metric.
4
When set to true, exports a _created metric for the Summary, Histogram, and Monotonic Sum metric points.
5
Maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is 3000000, which is approximately 2.861 megabytes.
Warning
  • This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics.
  • You must enable the --web.enable-remote-write-receiver feature flag on the remote Prometheus instance. Without this flag, pushing the metrics to the instance by using this exporter fails. See the sketch after this warning.
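
As an illustration only, if the remote Prometheus instance runs as a container whose arguments you control, the feature flag might be passed as in the following hypothetical excerpt; the image and the other argument are placeholders:

# ...
      containers:
      - name: prometheus
        image: <prometheus_image>
        args:
        - --config.file=/etc/prometheus/prometheus.yml
        - --web.enable-remote-write-receiver # Accepts metrics pushed by the Prometheus Remote Write Exporter.
# ...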

3.3.7. Kafka Exporter

The Kafka Exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. You must use it with batch and queued retry processors for higher throughput and resiliency.

Important

The Kafka Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Kafka Exporter

# ...
  config: |
    exporters:
      kafka:
        brokers: ["localhost:9092"] 1
        protocol_version: 2.0.0 2
        topic: otlp_spans 3
        auth:
          plain_text: 4
            username: example
            password: example
          tls: 5
            ca_file: ca.pem
            cert_file: cert.pem
            key_file: key.pem
            insecure: false 6
            server_name_override: kafka.example.corp 7
    service:
      pipelines:
        traces:
          exporters: [kafka]
# ...

1
The list of Kafka brokers. The default is localhost:9092.
2
The Kafka protocol version. For example, 2.0.0. This is a required field.
3
The name of the Kafka topic to export to. The following are the defaults: otlp_spans for traces, otlp_metrics for metrics, and otlp_logs for logs.
4
The plain text authentication configuration. If omitted, plain text authentication is disabled.
5
The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
6
Disables verifying the server’s certificate chain and host name. The default is false.
7
ServerName indicates the name of the server requested by the client to support virtual hosting.

3.3.8. Additional resources

3.4. Connectors

A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.

3.4.1. Count Connector

The Count Connector counts trace spans, trace span events, metrics, metric data points, and log records in exporter pipelines.

Important

The Count Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The following are the default metric names:

  • trace.span.count
  • trace.span.event.count
  • metric.count
  • metric.datapoint.count
  • log.record.count

You can also expose custom metric names.

OpenTelemetry Collector custom resource (CR) with an enabled Count Connector

# ...
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
    connectors:
      count:
    service:
      pipelines: 1
        traces/in:
          receivers: [otlp]
          exporters: [count] 2
        metrics/out:
          receivers: [count] 3
          exporters: [prometheus]
# ...

1
It is important to correctly configure the Count Connector as an exporter or receiver in the pipeline and to export the generated metrics to the correct exporter.
2
The Count Connector is configured to receive spans as an exporter.
3
The Count Connector is configured to emit generated metrics as a receiver.
Tip

If the Count Connector is not generating the expected metrics, you can check whether the OpenTelemetry Collector is receiving the expected spans, metrics, and logs, and whether the telemetry data flow through the Count Connector as expected. You can also use the Debug Exporter to inspect the incoming telemetry data.

The Count Connector can count telemetry data according to defined conditions and expose those data as metrics when configured by using such fields as spans, spanevents, metrics, datapoints, or logs. See the next example.

Example OpenTelemetry Collector CR for the Count Connector to count spans by conditions

# ...
  config: |
    connectors:
      count:
        spans: 1
          <custom_metric_name>: 2
            description: "<custom_metric_description>"
            conditions:
              - 'attributes["env"] == "dev"'
              - 'name == "devevent"'
# ...

1
In this example, the exposed metric counts spans with the specified conditions.
2
You can specify a custom metric name such as cluster.prod.event.count.
Tip

Write conditions correctly and follow the required syntax for attribute matching or telemetry field conditions. Improperly defined conditions are the most likely sources of errors.

The Count Connector can count telemetry data according to defined attributes when configured by using such fields as spans, spanevents, metrics, datapoints, or logs. See the next example. The attribute keys are injected into the telemetry data. You must define a value for the default_value field for missing attributes.

Example OpenTelemetry Collector CR for the Count Connector to count logs by attributes

# ...
  config: |
    connectors:
      count:
        logs: 1
          <custom_metric_name>: 2
            description: "<custom_metric_description>"
            attributes:
              - key: env
                default_value: unknown 3
# ...

1
Specifies attributes for logs.
2
You can specify a custom metric name such as my.log.count.
3
Defines a default value when the attribute is not set.

3.4.2. Routing Connector

The Routing Connector routes logs, metrics, and traces to specified pipelines according to resource attributes and their routing conditions, which are written as OpenTelemetry Transformation Language (OTTL) statements.

Important

The Routing Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Routing Connector

  config: |
    connectors:
      routing:
        table: 1
          - statement: route() where attributes["X-Tenant"] == "dev" 2
            pipelines: [traces/dev] 3
          - statement: route() where attributes["X-Tenant"] == "prod"
            pipelines: [traces/prod]
        default_pipelines: [traces/dev] 4
        error_mode: ignore 5
        match_once: false 6
    service:
      pipelines:
        traces/in:
          receivers: [otlp]
          exporters: [routing]
        traces/dev:
          receivers: [routing]
          exporters: [otlp/dev]
        traces/prod:
          receivers: [routing]
          exporters: [otlp/prod]

1
Connector routing table.
2
Routing conditions written as OTTL statements.
3
Destination pipelines for routing the matching telemetry data.
4
Destination pipelines for routing the telemetry data for which no routing condition is satisfied.
5
Error-handling mode: The propagate value is for logging an error and dropping the payload. The ignore value is for ignoring the condition and attempting to match with the next one. The silent value is the same as ignore but without logging the error. The default is propagate.
6
When set to true, the payload is routed only to the first pipeline whose routing condition is met. The default is false.

3.4.3. Forward Connector

The Forward Connector merges two pipelines of the same type.

Important

The Forward Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Forward Connector

# ...
receivers:
  otlp:
    protocols:
      grpc:
  jaeger:
    protocols:
      grpc:
processors:
  batch:
exporters:
  otlp:
    endpoint: tempo-simplest-distributor:4317
    tls:
      insecure: true
connectors:
  forward:
service:
  pipelines:
    traces/regiona:
      receivers: [otlp]
      processors: []
      exporters: [forward]
    traces/regionb:
      receivers: [jaeger]
      processors: []
      exporters: [forward]
    traces:
      receivers: [forward]
      processors: [batch]
      exporters: [otlp]
# ...

3.4.4. Spanmetrics Connector

The Spanmetrics Connector aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data.

OpenTelemetry Collector custom resource with an enabled Spanmetrics Connector

# ...
  config: |
    connectors:
      spanmetrics:
        metrics_flush_interval: 15s 1
    service:
      pipelines:
        traces:
          exporters: [spanmetrics]
        metrics:
          receivers: [spanmetrics]
# ...

1
Defines the flush interval of the generated metrics. Defaults to 15s.

3.4.5. Additional resources

3.5. Extensions

Extensions add capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically.

3.5.1. BearerTokenAuth Extension

The BearerTokenAuth Extension is an authenticator for receivers and exporters that are based on the HTTP and the gRPC protocol. You can use the OpenTelemetry Collector custom resource to configure client authentication and server authentication for the BearerTokenAuth Extension on the receiver and exporter side. This extension supports traces, metrics, and logs.

OpenTelemetry Collector custom resource with client and server authentication configured for the BearerTokenAuth Extension

# ...
  config: |
    extensions:
      bearertokenauth:
        scheme: "Bearer" 1
        token: "<token>" 2
        filename: "<token_file>" 3

    receivers:
      otlp:
        protocols:
          http:
            auth:
              authenticator: bearertokenauth 4
    exporters:
      otlp:
        auth:
          authenticator: bearertokenauth 5

    service:
      extensions: [bearertokenauth]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
# ...

1
You can configure the BearerTokenAuth Extension to send a custom scheme. The default is Bearer.
2
You can add the BearerTokenAuth Extension token as metadata to identify a message.
3
Path to a file that contains an authorization token that is transmitted with every message.
4
You can assign the authenticator configuration to an OTLP Receiver.
5
You can assign the authenticator configuration to an OTLP Exporter.

3.5.2. OAuth2Client Extension

The OAuth2Client Extension is an authenticator for exporters that are based on the HTTP and the gRPC protocol. Client authentication for the OAuth2Client Extension is configured in a separate section in the OpenTelemetry Collector custom resource. This extension supports traces, metrics, and logs.

Important

The OAuth2Client Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with client authentication configured for the OAuth2Client Extension

# ...
  config: |
    extensions:
      oauth2client:
        client_id: <client_id> 1
        client_secret: <client_secret> 2
        endpoint_params: 3
          audience: <audience>
        token_url: https://example.com/oauth2/default/v1/token 4
        scopes: ["api.metrics"] 5
        # tls settings for the token client
        tls: 6
          insecure: true 7
          ca_file: /var/lib/mycert.pem 8
          cert_file: <cert_file> 9
          key_file: <key_file> 10
        timeout: 2s 11

    receivers:
      otlp:
        protocols:
          http: {}

    exporters:
      otlp:
        auth:
          authenticator: oauth2client 12

    service:
      extensions: [oauth2client]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
# ...

1
Client identifier, which is provided by the identity provider.
2
Confidential key used to authenticate the client to the identity provider.
3
Further metadata, in the key-value pair format, which is transferred during authentication. For example, audience specifies the intended audience for the access token, indicating the recipient of the token.
4
The URL of the OAuth2 token endpoint, where the Collector requests access tokens.
5
The scopes define the specific permissions or access levels requested by the client.
6
The Transport Layer Security (TLS) settings for the token client, which is used to establish a secure connection when requesting tokens.
7
When set to true, configures the Collector to use an insecure or non-verified TLS connection to call the configured token endpoint.
8
The path to a Certificate Authority (CA) file that is used to verify the server’s certificate during the TLS handshake.
9
The path to the client certificate file that the client must use to authenticate itself to the OAuth2 server if required.
10
The path to the client’s private key file that is used with the client certificate if needed for authentication.
11
Sets a timeout for the token client’s request.
12
You can assign the authenticator configuration to an OTLP exporter.

3.5.3. File Storage Extension

The File Storage Extension supports traces, metrics, and logs. This extension can persist the state to the local file system and persists the sending queue for the OpenTelemetry Protocol (OTLP) exporters that are based on the HTTP and the gRPC protocols. This extension requires read and write access to a directory. It can use a default directory, but the default directory must already exist.

Important

The File Storage Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with a configured File Storage Extension that persists an OTLP sending queue

# ...
  config: |
    extensions:
      file_storage/all_settings:
        directory: /var/lib/otelcol/mydir 1
        timeout: 1s 2
        compaction:
          on_start: true 3
          directory: /tmp/ 4
          max_transaction_size: 65_536 5
        fsync: false 6

    exporters:
      otlp:
        sending_queue:
          storage: file_storage/all_settings 7

    service:
      extensions: [file_storage/all_settings] 8
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
# ...

1
Specifies the directory in which the telemetry data is stored.
2
Specifies the timeout time interval for opening the stored files.
3
Starts compaction when the Collector starts. If omitted, the default is false.
4
Specifies the directory in which the compactor stores the telemetry data.
5
Defines the maximum size of the compaction transaction. To ignore the transaction size, set to zero. If omitted, the default is 65536 bytes.
6
When set to true, forces the database to perform an fsync call after each write operation. This helps to ensure database integrity if there is an interruption to the database process, but at the cost of performance.
7
Buffers the OTLP Exporter data on the local file system.
8
Starts the File Storage Extension by the Collector.

3.5.4. OIDC Auth Extension

The OIDC Auth Extension authenticates incoming requests to receivers by using the OpenID Connect (OIDC) protocol. It validates the ID token in the authorization header against the issuer and updates the authentication context of the incoming request.

Important

The OIDC Auth Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the configured OIDC Auth Extension

# ...
  config: |
    extensions:
      oidc:
        attribute: authorization 1
        issuer_url: https://example.com/auth/realms/opentelemetry 2
        issuer_ca_path: /var/run/tls/issuer.pem 3
        audience: otel-collector 4
        username_claim: email 5
    receivers:
      otlp:
        protocols:
          grpc:
            auth:
              authenticator: oidc
    exporters:
      otlp:
        endpoint: <endpoint>
    service:
      extensions: [oidc]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
# ...

1
The name of the header that contains the ID token. The default name is authorization.
2
The base URL of the OIDC provider.
3
Optional: The path to the issuer’s CA certificate.
4
The audience for the token.
5
The name of the claim that contains the username. The default name is sub.

3.5.5. Jaeger Remote Sampling Extension

The Jaeger Remote Sampling Extension enables serving sampling strategies according to Jaeger's remote sampling API. You can configure this extension to proxy requests to a backing remote sampling server, such as a Jaeger collector down the pipeline, or to a static JSON file from the local file system.

Important

The Jaeger Remote Sampling Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with a configured Jaeger Remote Sampling Extension

# ...
  config: |
    extensions:
      jaegerremotesampling:
        source:
          reload_interval: 30s 1
          remote:
            endpoint: jaeger-collector:14250 2
          file: /etc/otelcol/sampling_strategies.json 3

    receivers:
      otlp:
        protocols:
          http: {}

    exporters:
      otlp:

    service:
      extensions: [jaegerremotesampling]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
# ...

1
The time interval at which the sampling configuration is updated.
2
The endpoint for reaching the Jaeger remote sampling strategy provider.
3
The path to a local file that contains a sampling strategy configuration in the JSON format.

Example of a Jaeger Remote Sampling strategy file

{
  "service_strategies": [
    {
      "service": "foo",
      "type": "probabilistic",
      "param": 0.8,
      "operation_strategies": [
        {
          "operation": "op1",
          "type": "probabilistic",
          "param": 0.2
        },
        {
          "operation": "op2",
          "type": "probabilistic",
          "param": 0.4
        }
      ]
    },
    {
      "service": "bar",
      "type": "ratelimiting",
      "param": 5
    }
  ],
  "default_strategy": {
    "type": "probabilistic",
    "param": 0.5,
    "operation_strategies": [
      {
        "operation": "/health",
        "type": "probabilistic",
        "param": 0.0
      },
      {
        "operation": "/metrics",
        "type": "probabilistic",
        "param": 0.0
      }
    ]
  }
}
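
After the Collector is running, a client can query the extension for the effective strategy of a given service. The following check is a minimal sketch: it assumes the extension's default HTTP port 5778 and a Collector pod labeled app.kubernetes.io/name=<instance_name>-collector, so adjust both to your deployment.

$ oc port-forward pod/$(oc get pod -l app.kubernetes.io/name=<instance_name>-collector -o=jsonpath='{.items[0].metadata.name}') 5778
$ curl "http://localhost:5778/sampling?service=foo"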

3.5.6. Performance Profiler Extension

The Performance Profiler Extension enables the Go net/http/pprof endpoint. Developers use this extension to collect performance profiles and investigate issues with the service.

Important

The Performance Profiler Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the configured Performance Profiler Extension

# ...
  config: |
    extensions:
      pprof:
        endpoint: localhost:1777 1
        block_profile_fraction: 0 2
        mutex_profile_fraction: 0 3
        save_to_file: test.pprof 4

    receivers:
      otlp:
        protocols:
          http: {}

    exporters:
      otlp:

    service:
      extensions: [pprof]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
# ...

1
The endpoint at which this extension listens. Use localhost:<port> to make it available only locally or ":<port>" to make it available on all network interfaces. The default value is localhost:1777.
2
Sets a fraction of blocking events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0.
3
Sets a fraction of mutex contention events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0.
4
The name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts. Profiling is saved to the file when the Collector is terminated.
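
To inspect a profile manually, you can port-forward to the pprof port and point the standard Go tooling at the /debug/pprof/ paths served by this extension. The following is a minimal sketch that assumes the default localhost:1777 endpoint, a Collector pod labeled app.kubernetes.io/name=<instance_name>-collector, and the Go toolchain installed on your workstation.

$ oc port-forward pod/$(oc get pod -l app.kubernetes.io/name=<instance_name>-collector -o=jsonpath='{.items[0].metadata.name}') 1777
$ go tool pprof http://localhost:1777/debug/pprof/heap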

3.5.7. Health Check Extension

The Health Check Extension provides an HTTP URL for checking the status of the OpenTelemetry Collector. You can use this extension as a liveness and readiness probe on OpenShift.

Important

The Health Check Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the configured Health Check Extension

# ...
  config: |
    extensions:
      health_check:
        endpoint: "0.0.0.0:13133" 1
        tls: 2
          ca_file: "/path/to/ca.crt"
          cert_file: "/path/to/cert.crt"
          key_file: "/path/to/key.key"
        path: "/health/status" 3
        check_collector_pipeline: 4
          enabled: true 5
          interval: "5m" 6
          exporter_failure_threshold: 5 7

    receivers:
      otlp:
        protocols:
          http: {}

    exporters:
      otlp:

    service:
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
# ...

1
The target IP address for publishing the health check status. The default is 0.0.0.0:13133.
2
The TLS server-side configuration. Defines paths to TLS certificates. If omitted, TLS is disabled.
3
The path for the health check server. The default is /.
4
Settings for the Collector pipeline health check.
5
Enables the Collector pipeline health check. The default is false.
6
The time interval for checking the number of failures. The default is 5m.
7
The number of failures that are tolerated before the container is marked as unhealthy. The default is 5.
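
For a quick manual check, you can port-forward to the health check port and query the configured path. The following is a minimal sketch that assumes the port 13133 and the /health/status path from the preceding example and a Collector pod labeled app.kubernetes.io/name=<instance_name>-collector.

$ oc port-forward pod/$(oc get pod -l app.kubernetes.io/name=<instance_name>-collector -o=jsonpath='{.items[0].metadata.name}') 13133
$ curl http://localhost:13133/health/status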

3.5.8. zPages Extension

The zPages Extension provides an HTTP endpoint that serves live data for debugging instrumented components in real time. You can use this extension for in-process diagnostics and insights into traces and metrics without relying on an external backend. With this extension, you can monitor and troubleshoot the behavior of the OpenTelemetry Collector and related components by watching the diagnostic information at the provided endpoint.

Important

The zPages Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with the configured zPages Extension

# ...
  config: |
    extensions:
      zpages:
        endpoint: "localhost:55679" 1

    receivers:
      otlp:
        protocols:
          http: {}
    exporters:
      debug:

    service:
      extensions: [zpages]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
# ...

1
Specifies the HTTP endpoint for serving the zPages extension. The default is localhost:55679.
Important

Accessing the HTTP endpoint requires port-forwarding because the Red Hat build of OpenTelemetry Operator does not expose this route.

You can enable port-forwarding by running the following oc command:

$ oc port-forward pod/$(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679

The Collector provides the following zPages for diagnostics:

ServiceZ
Shows an overview of the Collector services and links to the following zPages: PipelineZ, ExtensionZ, and FeatureZ. This page also displays information about the build version and runtime. An example of this page’s URL is http://localhost:55679/debug/servicez.
PipelineZ
Shows detailed information about the active pipelines in the Collector. This page displays the pipeline type, whether data are modified, and the associated receivers, processors, and exporters for each pipeline. An example of this page’s URL is http://localhost:55679/debug/pipelinez.
ExtensionZ
Shows the currently active extensions in the Collector. An example of this page’s URL is http://localhost:55679/debug/extensionz.
FeatureZ
Shows the feature gates enabled in the Collector along with their status and description. An example of this page’s URL is http://localhost:55679/debug/featurez.
TraceZ
Shows spans categorized by latency. Available time ranges include 0 µs, 10 µs, 100 µs, 1 ms, 10 ms, 100 ms, 1 s, 10 s, 1 m. This page also allows for quick inspection of error samples. An example of this page’s URL is http://localhost:55679/debug/tracez.

3.5.9. Additional resources

3.6. Target Allocator

The Target Allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances. The Target Allocator integrates with the Prometheus PodMonitor and ServiceMonitor custom resources (CRs). When the Target Allocator is enabled, the OpenTelemetry Operator adds the http_sd_config field to the enabled prometheus receiver so that the receiver connects to the Target Allocator service.

Important

The Target Allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Example OpenTelemetryCollector CR with the enabled Target Allocator

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: statefulset 1
  targetAllocator:
    enabled: true 2
    serviceAccount: 3
    prometheusCR:
      enabled: true 4
      scrapeInterval: 10s
      serviceMonitorSelector: 5
        name: app1
      podMonitorSelector: 6
        name: app2
  config: |
    receivers:
      prometheus: 7
        config:
          scrape_configs: []
    processors:
    exporters:
      debug: {}
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: []
          exporters: [debug]
# ...

1
When the Target Allocator is enabled, the deployment mode must be set to statefulset.
2
Enables the Target Allocator. Defaults to false.
3
The service account name of the Target Allocator deployment. The service account must have RBAC permissions to get the ServiceMonitor and PodMonitor custom resources and other objects from the cluster so that it can properly set labels on scraped metrics. The default service account name is <collector_name>-targetallocator.
4
Enables integration with the Prometheus PodMonitor and ServiceMonitor custom resources.
5
Label selector for the Prometheus ServiceMonitor custom resources. When left empty, enables all service monitors.
6
Label selector for the Prometheus PodMonitor custom resources. When left empty, enables all pod monitors.
7
Prometheus receiver with the minimal, empty scrape_configs: [] configuration option.
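
For the serviceMonitorSelector in the preceding example to match anything, the ServiceMonitor custom resource must carry the corresponding label. The following ServiceMonitor is a minimal sketch; the monitored Service labels, port name, and namespace are assumptions for illustration only.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app1
  namespace: observability
  labels:
    name: app1 # matches the serviceMonitorSelector of the Target Allocator
spec:
  selector:
    matchLabels:
      app: app1 # assumed label on the monitored Service
  endpoints:
    - port: web # assumed name of the metrics port on the Service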

The Target Allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration.

RBAC configuration for the Target Allocator service account

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-targetallocator
rules:
  - apiGroups: [""]
    resources:
      - services
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups: ["monitoring.coreos.com"]
    resources:
      - servicemonitors
      - podmonitors
    verbs: ["get", "list", "watch"]
  - apiGroups: ["discovery.k8s.io"]
    resources:
      - endpointslices
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-targetallocator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-targetallocator
subjects:
  - kind: ServiceAccount
    name: otel-targetallocator 1
    namespace: observability 2
# ...

1
The name of the Target Allocator service account.
2
The namespace of the Target Allocator service account.

Chapter 4. Configuring the instrumentation

The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the configuration of the instrumentation.

4.1. OpenTelemetry instrumentation configuration options

The Red Hat build of OpenTelemetry can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server (httpd).

Auto-instrumentation in OpenTelemetry refers to the capability where the framework automatically instruments an application without manual code changes. This enables developers and administrators to get observability into their applications with minimal effort and changes to the existing codebase.

Important

The Red Hat build of OpenTelemetry Operator supports only the injection mechanism for the instrumentation libraries; the instrumentation libraries themselves and upstream images are not supported. Customers can build their own instrumentation images or use community images.

4.1.1. Instrumentation options

Instrumentation options are specified in an Instrumentation custom resource (CR).

Sample Instrumentation CR

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation
spec:
  env:
    - name: OTEL_EXPORTER_OTLP_TIMEOUT
      value: "20"
  exporter:
    endpoint: http://production-collector.observability.svc.cluster.local:4317
  propagators:
    - w3c
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
  java:
    env:
    - name: OTEL_JAVAAGENT_DEBUG
      value: "true"

Table 4.1. Parameters used by the Operator to define the Instrumentation

Parameter | Description | Values
env | Common environment variables to define across all the instrumentations. |
exporter | Exporter configuration. |
propagators | Propagators defines inter-process context propagation configuration. | tracecontext, baggage, b3, b3multi, jaeger, ottrace, none
resource | Resource attributes configuration. |
sampler | Sampling configuration. |
apacheHttpd | Configuration for the Apache HTTP Server instrumentation. |
dotnet | Configuration for the .NET instrumentation. |
go | Configuration for the Go instrumentation. |
java | Configuration for the Java instrumentation. |
nodejs | Configuration for the Node.js instrumentation. |
python | Configuration for the Python instrumentation. |

Table 4.2. Default protocol for auto-instrumentation

Auto-instrumentation | Default protocol
Java 1.x | otlp/grpc
Java 2.x | otlp/http
Python | otlp/http
.NET | otlp/http
Go | otlp/http
Apache HTTP Server | otlp/grpc

4.1.2. Configuration of the OpenTelemetry SDK variables

You can use the instrumentation.opentelemetry.io/inject-sdk annotation on your workload to instruct the Red Hat build of OpenTelemetry Operator to inject some of the following OpenTelemetry SDK environment variables, depending on the Instrumentation CR, into your pod:

  • OTEL_SERVICE_NAME
  • OTEL_TRACES_SAMPLER
  • OTEL_TRACES_SAMPLER_ARG
  • OTEL_PROPAGATORS
  • OTEL_RESOURCE_ATTRIBUTES
  • OTEL_EXPORTER_OTLP_ENDPOINT
  • OTEL_EXPORTER_OTLP_CERTIFICATE
  • OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE
  • OTEL_EXPORTER_OTLP_CLIENT_KEY
Table 4.3. Values for the instrumentation.opentelemetry.io/inject-sdk annotation

Value | Description
"true" | Injects the Instrumentation resource with the default name from the current namespace.
"false" | Injects no Instrumentation resource.
"<instrumentation_name>" | Specifies the name of the Instrumentation resource to inject from the current namespace.
"<namespace>/<instrumentation_name>" | Specifies the name of the Instrumentation resource to inject from another namespace.
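
The annotation is typically added to the pod template of the workload. The following Deployment fragment is a minimal sketch; the workload name, image, and the my-instrumentation Instrumentation resource are assumptions for illustration only.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # assumed workload name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        instrumentation.opentelemetry.io/inject-sdk: "my-instrumentation" # Instrumentation resource in the current namespace
    spec:
      containers:
        - name: my-app
          image: quay.io/example/my-app:latest # assumed image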

4.1.3. Exporter configuration

Although the Instrumentation custom resource supports setting up one or more exporters per signal, auto-instrumentation configures only the OTLP Exporter. Therefore, you must configure the endpoint to point to the OTLP Receiver on the Collector.

Sample exporter TLS CA configuration using a config map

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
# ...
spec:
# ...
  exporter:
    endpoint: https://production-collector.observability.svc.cluster.local:4317  1
    tls:
      configMapName: ca-bundle  2
      ca_file: service-ca.crt 3
# ...

1
Specifies the OTLP endpoint using the HTTPS scheme and TLS.
2
Specifies the name of the config map. The config map must already exist in the namespace of the pod injecting the auto-instrumentation.
3
Points to the CA certificate in the config map or the absolute path to the certificate if the certificate is already present in the workload file system.

Sample exporter mTLS configuration using a Secret

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
# ...
spec:
# ...
  exporter:
    endpoint: https://production-collector.observability.svc.cluster.local:4317  1
    tls:
      secretName: serving-certs 2
      ca_file: service-ca.crt 3
      cert_file: tls.crt 4
      key_file: tls.key 5
# ...

1
Specifies the OTLP endpoint using the HTTPS scheme and TLS.
2
Specifies the name of the Secret for the ca_file, cert_file, and key_file values. The Secret must already exist in the namespace of the pod injecting the auto-instrumentation.
3
Points to the CA certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system.
4
Points to the client certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system.
5
Points to the client key in the Secret or the absolute path to a key if the key is already present in the workload file system.
Note

You can provide the CA certificate in a config map or Secret. If you provide it in both, the config map takes precedence over the Secret.

Example configuration for CA bundle injection by using a config map and Instrumentation CR

apiVersion: v1
kind: ConfigMap
metadata:
  name: otelcol-cabundle
  namespace: tutorial-application
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
# ...
---
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317
    tls:
      configMapName: otelcol-cabundle
      ca: service-ca.crt
# ...

4.1.4. Configuration of the Apache HTTP Server auto-instrumentation

Important

The Apache HTTP Server auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Table 4.4. Parameters for the .spec.apacheHttpd field

Name | Description | Default
attrs | Attributes specific to the Apache HTTP Server. |
configPath | Location of the Apache HTTP Server configuration. | /usr/local/apache2/conf
env | Environment variables specific to the Apache HTTP Server. |
image | Container image with the Apache SDK and auto-instrumentation. |
resourceRequirements | The compute resource requirements. |
version | Apache HTTP Server version. | 2.4

The PodSpec annotation to enable injection

instrumentation.opentelemetry.io/inject-apache-httpd: "true"

4.1.5. Configuration of the .NET auto-instrumentation

Important

The .NET auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

By default, this feature injects unsupported, upstream instrumentation libraries.

Name | Description
env | Environment variables specific to .NET.
image | Container image with the .NET SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.

For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be updated if the exporter endpoint is set to port 4317. The .NET auto-instrumentation uses http/proto by default, so the telemetry data must be sent to port 4318.

The PodSpec annotation to enable injection

instrumentation.opentelemetry.io/inject-dotnet: "true"

4.1.6. Configuration of the Go auto-instrumentation

Important

The Go auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

By default, this feature injects unsupported, upstream instrumentation libraries.

Name | Description
env | Environment variables specific to Go.
image | Container image with the Go SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.

The PodSpec annotation to enable injection

instrumentation.opentelemetry.io/inject-go: "true"

Additional permissions required for the Go auto-instrumentation in the OpenShift cluster

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: otel-go-instrumentation-scc
allowHostDirVolumePlugin: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- "SYS_PTRACE"
fsGroup:
  type: RunAsAny
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
  type: RunAsAny

Tip

The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows:

$ oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>

4.1.7. Configuration of the Java auto-instrumentation

Important

The Java auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

By default, this feature injects unsupported, upstream instrumentation libraries.

Name | Description
env | Environment variables specific to Java.
image | Container image with the Java SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.

The PodSpec annotation to enable injection

instrumentation.opentelemetry.io/inject-java: "true"

4.1.8. Configuration of the Node.js auto-instrumentation

Important

The Node.js auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

By default, this feature injects unsupported, upstream instrumentation libraries.

Name | Description
env | Environment variables specific to Node.js.
image | Container image with the Node.js SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.

The PodSpec annotations to enable injection

instrumentation.opentelemetry.io/inject-nodejs: "true"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable"

The instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable.

4.1.9. Configuration of the Python auto-instrumentation

Important

The Python auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

By default, this feature injects unsupported, upstream instrumentation libraries.

Name | Description
env | Environment variables specific to Python.
image | Container image with the Python SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.

For the Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be updated if the exporter endpoint is set to port 4317. The Python auto-instrumentation uses http/proto by default, so the telemetry data must be sent to port 4318.

The PodSpec annotation to enable injection

instrumentation.opentelemetry.io/inject-python: "true"

4.1.10. Multi-container pods

By default, the instrumentation is injected into the first container that is listed in the pod specification. In some cases, you can also specify target containers for injection.

Pod annotation

instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>"

Note

The Go auto-instrumentation does not support multi-container auto-instrumentation injection.

4.1.11. Multi-container pods with multiple instrumentations

Injecting instrumentation for an application language to one or more containers in a multi-container pod requires the following annotation:

instrumentation.opentelemetry.io/<application_language>-container-names: "<container_1>,<container_2>" 1
1
You can inject instrumentation for only one language per container. For the list of supported <application_language> values, see the following table. An example of the combined annotations follows the table.
Table 4.5. Supported values for the <application_language>

Language | Value for <application_language>
ApacheHTTPD | apache
DotNet | dotnet
Java | java
NGINX | inject-nginx
NodeJS | nodejs
Python | python
SDK | sdk
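
For example, the following pod template annotations are a minimal sketch that injects the Java instrumentation into only two containers of a multi-container pod; the container names are assumptions for illustration only.

instrumentation.opentelemetry.io/inject-java: "true"
instrumentation.opentelemetry.io/java-container-names: "myapp,myapp2"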

4.1.12. Using the instrumentation CR with Service Mesh

When using the instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi propagator.
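
For example, the following Instrumentation CR is a minimal sketch that sets the required propagator; the resource name and the exporter endpoint are assumptions for illustration only.

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: servicemesh-instrumentation # assumed name
spec:
  propagators:
    - b3multi
  exporter:
    endpoint: http://otel-collector.observability.svc.cluster.local:4317 # assumed Collector endpoint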

Chapter 5. Sending traces and metrics to the OpenTelemetry Collector

You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack instance.

Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection.

5.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection

You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection.

The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector.

Prerequisites

  • The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed.
  • You have access to the cluster through the web console or the OpenShift CLI (oc):

    • You are logged in to the web console as a cluster administrator with the cluster-admin role.
    • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
    • For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

Procedure

  1. Create a project for an OpenTelemetry Collector instance.

    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: observability
  2. Create a service account.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-sidecar
      namespace: observability
  3. Grant the permissions to the service account for the k8sattributes and resourcedetection processors.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-sidecar
      namespace: observability
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io
  4. Deploy the OpenTelemetry Collector as a sidecar.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: observability
    spec:
      serviceAccount: otel-collector-sidecar
      mode: sidecar
      config: |
        receivers:
          otlp:
            protocols:
              grpc: {}
              http: {}
        processors:
          batch: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
            timeout: 2s
        exporters:
          otlp:
            endpoint: "tempo-<example>-gateway:8090" 1
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [otlp]
              processors: [memory_limiter, resourcedetection, batch]
              exporters: [otlp]
    1
    This points to the Gateway of the TempoStack instance named <example>, which is deployed by using the Tempo Operator.
  5. Create your deployment using the otel-collector-sidecar service account.
  6. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance.
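
    The following Deployment fragment is a minimal sketch of steps 5 and 6; the workload name and image are assumptions for illustration only.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app # assumed workload name
      namespace: observability
    spec:
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            sidecar.opentelemetry.io/inject: "true" # injects the sidecar Collector into the pod
        spec:
          serviceAccountName: otel-collector-sidecar
          containers:
            - name: my-app
              image: quay.io/example/my-app:latest # assumed image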

5.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection

You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables.

Prerequisites

  • The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed.
  • You have access to the cluster through the web console or the OpenShift CLI (oc):

    • You are logged in to the web console as a cluster administrator with the cluster-admin role.
    • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
    • For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

Procedure

  1. Create a project for an OpenTelemetry Collector instance.

    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: observability
  2. Create a service account.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-deployment
      namespace: observability
  3. Grant the permissions to the service account for the k8sattributes and resourcedetection processors.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-deployment
      namespace: observability
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io
  4. Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: observability
    spec:
      mode: deployment
      serviceAccount: otel-collector-deployment
      config: |
        receivers:
          jaeger:
            protocols:
              grpc: {}
              thrift_binary: {}
              thrift_compact: {}
              thrift_http: {}
          opencensus:
          otlp:
            protocols:
              grpc: {}
              http: {}
          zipkin: {}
        processors:
          batch: {}
          k8sattributes: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
        exporters:
          otlp:
            endpoint: "tempo-<example>-distributor:4317" 1
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [jaeger, opencensus, otlp, zipkin]
              processors: [memory_limiter, k8sattributes, resourcedetection, batch]
              exporters: [otlp]
    1
    This points to the distributor of the TempoStack instance named <example>, which is deployed by using the Tempo Operator.
  5. Set the environment variables in the container with your instrumented application.

    Name | Description | Default value
    OTEL_SERVICE_NAME | Sets the value of the service.name resource attribute. | ""
    OTEL_EXPORTER_OTLP_ENDPOINT | Base endpoint URL for any signal type with an optionally specified port number. | https://localhost:4317
    OTEL_EXPORTER_OTLP_CERTIFICATE | Path to the certificate file for the TLS credentials of the gRPC client. | https://localhost:4317
    OTEL_TRACES_SAMPLER | Sampler to be used for traces. | parentbased_always_on
    OTEL_EXPORTER_OTLP_PROTOCOL | Transport protocol for the OTLP exporter. | grpc
    OTEL_EXPORTER_OTLP_TIMEOUT | Maximum time interval for the OTLP exporter to wait for each batch export. | 10s
    OTEL_EXPORTER_OTLP_INSECURE | Disables client transport security for gRPC requests. An https scheme in the endpoint URL overrides this setting. | False
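
    The following pod spec fragment is a minimal sketch of setting these variables on an instrumented application; the container name, image, and endpoint are assumptions based on the otel Collector deployed in this procedure.

    # ...
    spec:
      containers:
        - name: <app_container_name>
          image: <app_image>
          env:
            - name: OTEL_SERVICE_NAME
              value: "<app_service_name>"
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://otel-collector.observability.svc.cluster.local:4317" # assumed Collector service endpoint
            - name: OTEL_EXPORTER_OTLP_INSECURE
              value: "true"
    # ...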

Chapter 6. Configuring metrics for the monitoring stack

As a cluster administrator, you can configure the OpenTelemetry Collector custom resource (CR) to perform the following tasks:

  • Create a Prometheus ServiceMonitor CR for scraping the Collector’s pipeline metrics and the enabled Prometheus exporters.
  • Configure the Prometheus receiver to scrape metrics from the in-cluster monitoring stack.

6.1. Configuration for sending metrics to the monitoring stack

You can configure the OpenTelemetryCollector custom resource (CR) to create a Prometheus ServiceMonitor CR or a PodMonitor CR for a sidecar deployment. A ServiceMonitor can scrape the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints.

Example of the OpenTelemetry Collector CR with the Prometheus exporter

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
spec:
  mode: deployment
  observability:
    metrics:
      enableMetrics: true 1
  config: |
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
        resource_to_telemetry_conversion:
          enabled: true # by default resource attributes are dropped
    service:
      telemetry:
        metrics:
          address: ":8888"
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [prometheus]

1
Configures the Red Hat build of OpenTelemetry Operator to create the Prometheus ServiceMonitor CR or PodMonitor CR to scrape the Collector’s internal metrics endpoint and the Prometheus exporter metrics endpoints.
Note

Setting enableMetrics to true creates the following two ServiceMonitor instances:

  • One ServiceMonitor instance for the <instance_name>-collector-monitoring service. This ServiceMonitor instance scrapes the Collector’s internal metrics.
  • One ServiceMonitor instance for the <instance_name>-collector service. This ServiceMonitor instance scrapes the metrics exposed by the Prometheus exporter instances.

Alternatively, a manually created Prometheus PodMonitor CR can provide finer control, for example, to remove duplicated labels that are added during Prometheus scraping.

Example of the PodMonitor CR that configures the monitoring stack to scrape the Collector metrics

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: <cr_name>-collector 1
  podMetricsEndpoints:
  - port: metrics 2
  - port: promexporter 3
    relabelings:
    - action: labeldrop
      regex: pod
    - action: labeldrop
      regex: container
    - action: labeldrop
      regex: endpoint
    metricRelabelings:
    - action: labeldrop
      regex: instance
    - action: labeldrop
      regex: job

1
The name of the OpenTelemetry Collector CR.
2
The name of the internal metrics port for the OpenTelemetry Collector. This port name is always metrics.
3
The name of the Prometheus exporter port for the OpenTelemetry Collector.

6.2. Configuration for receiving metrics from the monitoring stack

A configured OpenTelemetry Collector custom resource (CR) can set up the Prometheus receiver to scrape metrics from the in-cluster monitoring stack.

Example of the OpenTelemetry Collector CR for scraping metrics from the in-cluster monitoring stack

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-monitoring-view 1
subjects:
  - kind: ServiceAccount
    name: otel-collector
    namespace: observability
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: cabundle
  namespace: observability
  annotations:
    service.beta.openshift.io/inject-cabundle: "true" 2
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  volumeMounts:
    - name: cabundle-volume
      mountPath: /etc/pki/ca-trust/source/service-ca
      readOnly: true
  volumes:
    - name: cabundle-volume
      configMap:
        name: cabundle
  mode: deployment
  config: |
    receivers:
      prometheus: 3
        config:
          scrape_configs:
            - job_name: 'federate'
              scrape_interval: 15s
              scheme: https
              tls_config:
                ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt
              bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
              honor_labels: false
              params:
                'match[]':
                  - '{__name__="<metric_name>"}' 4
              metrics_path: '/federate'
              static_configs:
                - targets:
                  - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091"
    exporters:
      debug: 5
        verbosity: detailed
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: []
          exporters: [debug]

1
Assigns the cluster-monitoring-view cluster role to the service account of the OpenTelemetry Collector so that it can access the metrics data.
2
Injects the OpenShift service CA for configuring the TLS in the Prometheus receiver.
3
Configures the Prometheus receiver to scrape the federate endpoint from the in-cluster monitoring stack.
4
Uses the Prometheus query language to select the metrics to be scraped. See the in-cluster monitoring documentation for more details and limitations of the federate endpoint.
5
Configures the debug exporter to print the metrics to the standard output.

6.3. Additional resources

Chapter 7. Forwarding telemetry data

You can use the OpenTelemetry Collector to forward your telemetry data.

7.1. Forwarding traces to a TempoStack instance

To configure forwarding traces to a TempoStack instance, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources.

Prerequisites

  • The Red Hat build of OpenTelemetry Operator is installed.
  • The Tempo Operator is installed.
  • A TempoStack instance is deployed on the cluster.

Procedure

  1. Create a service account for the OpenTelemetry Collector.

    Example ServiceAccount

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-deployment

  2. Create a cluster role for the service account.

    Example ClusterRole

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] 1 2
      verbs: ["get", "watch", "list"]

    1
    The k8sattributes processor requires permissions for the pods and namespaces resources.
    2
    The resourcedetection processor requires permissions for the infrastructures and infrastructures/status resources.
  3. Bind the cluster role to the service account.

    Example ClusterRoleBinding

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-deployment
      namespace: otel-collector-example
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io

  4. Create the YAML file to define the OpenTelemetryCollector custom resource (CR).

    Example OpenTelemetryCollector

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
    spec:
      mode: deployment
      serviceAccount: otel-collector-deployment
      config: |
        receivers:
          jaeger:
            protocols:
              grpc: {}
              thrift_binary: {}
              thrift_compact: {}
              thrift_http: {}
          opencensus: {}
          otlp:
            protocols:
              grpc: {}
              http: {}
          zipkin: {}
        processors:
          batch: {}
          k8sattributes: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
        exporters:
          otlp:
            endpoint: "tempo-simplest-distributor:4317" 1
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [jaeger, opencensus, otlp, zipkin] 2
              processors: [memory_limiter, k8sattributes, resourcedetection, batch]
              exporters: [otlp]

    1
    The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created.
    2
    The Collector is configured with receivers for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol.
Tip

You can deploy telemetrygen as a test:

apiVersion: batch/v1
kind: Job
metadata:
  name: telemetrygen
spec:
  template:
    spec:
      containers:
        - name: telemetrygen
          image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest
          args:
            - traces
            - --otlp-endpoint=otel-collector:4317
            - --otlp-insecure
            - --duration=30s
            - --workers=1
      restartPolicy: Never
  backoffLimit: 4

7.2. Forwarding logs to a LokiStack instance

You can deploy the OpenTelemetry Collector with Collector components to forward logs to a LokiStack instance.

This use of the Loki Exporter is a temporary Technology Preview feature. It is planned to be superseded by an improved solution in which the Loki Exporter is replaced with the OTLP HTTP Exporter.

Important

The Loki Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • The Red Hat build of OpenTelemetry Operator is installed.
  • The Loki Operator is installed.
  • A supported LokiStack instance is deployed on the cluster.

Procedure

  1. Create a service account for the OpenTelemetry Collector.

    Example ServiceAccount object

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-deployment
      namespace: openshift-logging

  2. Create a cluster role that grants the Collector’s service account the permissions to push logs to the LokiStack application tenant.

    Example ClusterRole object

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector-logs-writer
    rules:
     - apiGroups: ["loki.grafana.com"]
       resourceNames: ["logs"]
       resources: ["application"]
       verbs: ["create"]
     - apiGroups: [""]
       resources: ["pods", "namespaces", "nodes"]
       verbs: ["get", "watch", "list"]
     - apiGroups: ["apps"]
       resources: ["replicasets"]
       verbs: ["get", "list", "watch"]
     - apiGroups: ["extensions"]
       resources: ["replicasets"]
       verbs: ["get", "list", "watch"]

  3. Bind the cluster role to the service account.

    Example ClusterRoleBinding object

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector-logs-writer
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: otel-collector-logs-writer
    subjects:
      - kind: ServiceAccount
        name: otel-collector-deployment
        namespace: openshift-logging

  4. Create an OpenTelemetryCollector custom resource (CR) object.

    Example OpenTelemetryCollector CR object

    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: openshift-logging
    spec:
      serviceAccount: otel-collector-deployment
      config:
        extensions:
          bearertokenauth:
            filename: "/var/run/secrets/kubernetes.io/serviceaccount/token"
        receivers:
          otlp:
            protocols:
              grpc: {}
              http: {}
        processors:
          k8sattributes:
            auth_type: "serviceAccount"
            passthrough: false
            extract:
              metadata:
                - k8s.pod.name
                - k8s.container.name
                - k8s.namespace.name
              labels:
              - tag_name: app.label.component
                key: app.kubernetes.io/component
                from: pod
            pod_association:
              - sources:
                  - from: resource_attribute
                    name: k8s.pod.name
                  - from: resource_attribute
                    name: k8s.container.name
                  - from: resource_attribute
                    name: k8s.namespace.name
              - sources:
                  - from: connection
          resource:
            attributes: 1
              - key: loki.format 2
                action: insert
                value: json
              - key:  kubernetes_namespace_name
                from_attribute: k8s.namespace.name
                action: upsert
              - key:  kubernetes_pod_name
                from_attribute: k8s.pod.name
                action: upsert
              - key: kubernetes_container_name
                from_attribute: k8s.container.name
                action: upsert
              - key: log_type
                value: application
                action: upsert
              - key: loki.resource.labels 3
                value: log_type, kubernetes_namespace_name, kubernetes_pod_name, kubernetes_container_name
                action: insert
          transform:
            log_statements:
              - context: log
                statements:
                  - set(attributes["level"], ConvertCase(severity_text, "lower"))
        exporters:
          loki:
            endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/loki/api/v1/push 4
            tls:
              ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
            auth:
              authenticator: bearertokenauth
          debug:
            verbosity: detailed
        service:
          extensions: [bearertokenauth] 5
          pipelines:
            logs:
              receivers: [otlp]
              processors: [k8sattributes, transform, resource]
              exporters: [loki] 6
            logs/test:
              receivers: [otlp]
              processors: []
              exporters: [debug]

    1
    Provides the following resource attributes to be used by the web console: kubernetes_namespace_name, kubernetes_pod_name, kubernetes_container_name, and log_type. If you specify them as values for this loki.resource.labels attribute, then the Loki Exporter processes them as labels.
    2
    Configures the format of Loki logs. Supported values are json, logfmt, and raw.
    3
    Configures which resource attributes are processed as Loki labels.
    4
    Points the Loki Exporter to the gateway of the LokiStack logging-loki instance and uses the application tenant.
    5
    Enables the BearerTokenAuth Extension that is required by the Loki Exporter.
    6
    Enables the Loki Exporter to export logs from the Collector.
Tip

You can deploy telemetrygen as a test:

apiVersion: batch/v1
kind: Job
metadata:
  name: telemetrygen
spec:
  template:
    spec:
      containers:
        - name: telemetrygen
          image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1
          args:
            - logs
            - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317
            - --otlp-insecure
            - --duration=180s
            - --workers=1
            - --logs=10
            - --otlp-attributes=k8s.container.name="telemetrygen"
      restartPolicy: Never
  backoffLimit: 4

Chapter 8. Configuring the OpenTelemetry Collector metrics

The OpenTelemetry Collector exposes internal metrics about its own performance and resource consumption. The following list shows some of these metrics:

  • Collector memory usage
  • CPU utilization
  • Number of active traces and spans processed
  • Dropped spans, logs, or metrics
  • Exporter and receiver statistics

The Red Hat build of OpenTelemetry Operator automatically creates a service named <instance_name>-collector-monitoring that exposes the Collector’s internal metrics. This service listens on port 8888 by default.

You can use these metrics to monitor the Collector’s performance, resource consumption, and other internal behaviors. You can also use a Prometheus instance or another monitoring tool to scrape these metrics from the <instance_name>-collector-monitoring service.
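
For example, assuming an OpenTelemetryCollector instance named otel deployed in the observability project (hypothetical names), you can inspect the automatically created monitoring service and confirm that it exposes port 8888:

$ oc get service otel-collector-monitoring -n observability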

Note

When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true, the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics.

Prerequisites

  • Monitoring for user-defined projects is enabled in the cluster.

Procedure

  • To enable metrics of an OpenTelemetry Collector instance, set the spec.observability.metrics.enableMetrics field to true:

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: <name>
    spec:
      observability:
        metrics:
          enableMetrics: true

Verification

You can use the Administrator view of the web console to verify successful configuration:

  1. Go to Observe → Targets.
  2. Filter by Source: User.
  3. Check that the ServiceMonitors or PodMonitors in the opentelemetry-collector-<instance_name> format have the Up status.
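
Alternatively, you can verify from the command line that the monitor resource exists. This is a sketch, assuming the Collector instance runs in the <namespace> project:

$ oc get servicemonitors,podmonitors -n <namespace>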

Chapter 9. Gathering the observability data from multiple clusters

For a multicluster configuration, you can create one OpenTelemetry Collector instance in each of the remote clusters and then forward all the telemetry data to one central OpenTelemetry Collector instance.

Prerequisites

  • The Red Hat build of OpenTelemetry Operator is installed.
  • The Tempo Operator is installed.
  • A TempoStack instance is deployed on the cluster.
  • The following certificates are mounted: an Issuer, a self-signed certificate, a CA issuer, and the client and server certificates. To create any of these certificates, see step 1.

Procedure

  1. Mount the following certificates in the OpenTelemetry Collector instance, skipping already mounted certificates.

    1. An Issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift.

      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: selfsigned-issuer
      spec:
        selfSigned: {}
    2. A self-signed certificate.

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ca
      spec:
        isCA: true
        commonName: ca
        subject:
          organizations:
            - Organization # <your_organization_name>
          organizationalUnits:
            - Widgets
        secretName: ca-secret
        privateKey:
          algorithm: ECDSA
          size: 256
        issuerRef:
          name: selfsigned-issuer
          kind: Issuer
          group: cert-manager.io
    3. A CA issuer.

      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: test-ca-issuer
      spec:
        ca:
          secretName: ca-secret
    4. The client and server certificates.

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: server
      spec:
        secretName: server-tls
        isCA: false
        usages:
          - server auth
          - client auth
        dnsNames:
        - "otel.observability.svc.cluster.local" 1
        issuerRef:
          name: test-ca-issuer
      ---
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: client
      spec:
        secretName: client-tls
        isCA: false
        usages:
          - server auth
          - client auth
        dnsNames:
        - "otel.observability.svc.cluster.local" 2
        issuerRef:
          name: test-ca-issuer
      1
      List of exact DNS names to be mapped to a solver in the server OpenTelemetry Collector instance.
      2
      List of exact DNS names to be mapped to a solver in the client OpenTelemetry Collector instance.
  2. Create a service account for the OpenTelemetry Collector instance.

    Example ServiceAccount

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-deployment

  3. Create a cluster role for the service account.

    Example ClusterRole

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] 1 2
      verbs: ["get", "watch", "list"]

    1
    The k8sattributesprocessor requires permissions for pods and namespace resources.
    2
    The resourcedetectionprocessor requires permissions for infrastructures and infrastructures/status.
  4. Bind the cluster role to the service account.

    Example ClusterRoleBinding

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-deployment
      namespace: otel-collector-<example>
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io

  5. Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters.

    Example OpenTelemetryCollector custom resource for the edge clusters

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: otel-collector-<example>
    spec:
      mode: daemonset
      serviceAccount: otel-collector-deployment
      config: |
        receivers:
          jaeger:
            protocols:
              grpc: {}
              thrift_binary: {}
              thrift_compact: {}
              thrift_http: {}
          opencensus:
          otlp:
            protocols:
              grpc: {}
              http: {}
          zipkin: {}
        processors:
          batch: {}
          k8sattributes: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
        exporters:
          otlphttp:
            endpoint: https://observability-cluster.com:443 1
            tls:
              insecure: false
              cert_file: /certs/server.crt
              key_file: /certs/server.key
              ca_file: /certs/ca.crt
        service:
          pipelines:
            traces:
              receivers: [jaeger, opencensus, otlp, zipkin]
              processors: [memory_limiter, k8sattributes, resourcedetection, batch]
              exporters: [otlphttp]
      volumes:
        - name: otel-certs
          secret:
            name: otel-certs
      volumeMounts:
        - name: otel-certs
          mountPath: /certs

    1
    The Collector exporter is configured to export over OTLP HTTP and points to the OpenTelemetry Collector on the central cluster.
  6. Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster.

    Example OpenTelemetryCollector custom resource for the central cluster

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otlp-receiver
      namespace: observability
    spec:
      mode: "deployment"
      ingress:
        type: route
        route:
          termination: "passthrough"
      config: |
        receivers:
          otlp:
            protocols:
              http:
                tls: 1
                  cert_file: /certs/server.crt
                  key_file: /certs/server.key
                  client_ca_file: /certs/ca.crt
        exporters:
          logging: {}
          otlp:
            endpoint: "tempo-<simplest>-distributor:4317" 2
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [otlp]
              processors: []
              exporters: [otlp]
      volumes:
        - name: otel-certs
          secret:
            name: otel-certs
      volumeMounts:
        - name: otel-certs
          mountPath: /certs

    1
    The Collector receiver requires the certificates listed in the first step.
    2
    The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, which in this example is "tempo-simplest-distributor:4317" and is already created.

Chapter 10. Troubleshooting

The OpenTelemetry Collector offers multiple ways to measure its health as well as investigate data ingestion issues.

10.1. Collecting diagnostic data from the command line

When submitting a support case, it is helpful to provide Red Hat Support with diagnostic information about your cluster. You can use the oc adm must-gather tool to gather diagnostic data for resources of various types, such as OpenTelemetryCollector, Instrumentation, and the created resources like Deployment, Pod, or ConfigMap. The oc adm must-gather tool creates a new pod that collects this data.

Procedure

  • From the directory where you want to save the collected data, run the oc adm must-gather command to collect the data:

    $ oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- \
    /usr/bin/must-gather --operator-namespace <operator_namespace> 1
    1
    The default namespace where the Operator is installed is openshift-opentelemetry-operator.
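
    For example, if the Operator is installed in the default openshift-opentelemetry-operator namespace, the command looks as follows:

    $ oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- \
    /usr/bin/must-gather --operator-namespace openshift-opentelemetry-operator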

Verification

  • Verify that the new directory is created and contains the collected data.

10.2. Getting the OpenTelemetry Collector logs

You can get the logs for the OpenTelemetry Collector as follows.

Procedure

  1. Set the relevant log level in the OpenTelemetryCollector custom resource (CR):

      config: |
        service:
          telemetry:
            logs:
              level: debug 1
    1
    Collector’s log level. Supported values are info, warn, error, and debug. The default is info.
  2. Use the oc logs command or the web console to retrieve the logs.
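
    For example, for a Collector instance named otel that runs in deployment mode (hypothetical names; the Operator typically names the Deployment <name>-collector), you might run:

    $ oc logs deployment/otel-collector -n <project_of_opentelemetry_instance>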

10.3. Exposing the metrics

The OpenTelemetry Collector exposes metrics about the data volumes it has processed. The following metrics are for spans, although similar metrics are exposed for the metrics and logs signals:

otelcol_receiver_accepted_spans
The number of spans successfully pushed into the pipeline.
otelcol_receiver_refused_spans
The number of spans that could not be pushed into the pipeline.
otelcol_exporter_sent_spans
The number of spans successfully sent to the destination.
otelcol_exporter_enqueue_failed_spans
The number of spans that failed to be added to the sending queue.

The Operator creates a <cr_name>-collector-monitoring telemetry service that you can use to scrape the metrics endpoint.

Procedure

  1. Enable the telemetry service by adding the following lines in the OpenTelemetryCollector custom resource (CR):

    # ...
      config: |
        service:
          telemetry:
            metrics:
              address: ":8888" 1
    # ...
    1
    The address at which the internal collector metrics are exposed. Defaults to :8888.
  2. Retrieve the metrics by port-forwarding the Collector pod:

    $ oc port-forward <collector_pod> 8888:8888
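
    With the port forwarded, you can query the internal metrics endpoint locally. The following curl call is a usage sketch:

    $ curl http://localhost:8888/metrics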
  3. In the OpenTelemetryCollector CR, set the enableMetrics field to true to scrape internal metrics:

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    spec:
    # ...
      mode: deployment
      observability:
        metrics:
          enableMetrics: true
    # ...

    Depending on the deployment mode of the OpenTelemetry Collector, the internal metrics are scraped by using PodMonitors or ServiceMonitors.

    Note

    Alternatively, if you do not set the enableMetrics field to true, you can access the metrics endpoint at http://localhost:8888/metrics.

  4. On the Observe page in the web console, enable User Workload Monitoring to visualize the scraped metrics.

    Note

    Not all processors expose the required metrics.

  5. In the web console, go to Observe → Dashboards and select the OpenTelemetry Collector dashboard from the drop-down list to view it.

    Tip

    You can filter the visualized data such as spans or metrics by the Collector instance, namespace, or OpenTelemetry components such as processors, receivers, or exporters.

10.4. Debug exporter

You can configure the debug exporter to export the collected data to the standard output.

Procedure

  1. Configure the OpenTelemetryCollector custom resource as follows:

      config: |
        exporters:
          debug:
            verbosity: detailed
        service:
          pipelines:
            traces:
              exporters: [debug]
            metrics:
              exporters: [debug]
            logs:
              exporters: [debug]
  2. Use the oc logs command or the web console to view the data that the debug exporter writes to the standard output.

Chapter 11. Migrating

Important

The Red Hat OpenShift distributed tracing platform (Jaeger) is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

If you are already using the Red Hat OpenShift distributed tracing platform (Jaeger) for your applications, you can migrate to the Red Hat build of OpenTelemetry, which is based on the OpenTelemetry open-source project.

The Red Hat build of OpenTelemetry provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the Red Hat build of OpenTelemetry can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications.

Migration from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments.

11.1. Migrating with sidecars

The Red Hat build of OpenTelemetry Operator supports sidecar injection into deployment workloads, so you can migrate from a distributed tracing platform (Jaeger) sidecar to a Red Hat build of OpenTelemetry sidecar.

Prerequisites

  • The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster.
  • The Red Hat build of OpenTelemetry is installed.

Procedure

  1. Configure the OpenTelemetry Collector as a sidecar.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: <otel-collector-namespace>
    spec:
      mode: sidecar
      config: |
        receivers:
          jaeger:
            protocols:
              grpc: {}
              thrift_binary: {}
              thrift_compact: {}
              thrift_http: {}
        processors:
          batch: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
            timeout: 2s
        exporters:
          otlp:
            endpoint: "tempo-<example>-gateway:8090" 1
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [jaeger]
              processors: [memory_limiter, resourcedetection, batch]
              exporters: [otlp]
    1
    This endpoint points to the Gateway of the TempoStack instance named <example>, which is deployed by using the Tempo Operator.
  2. Create a service account for running your application.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-sidecar
  3. Create a cluster role for the permissions needed by some processors.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector-sidecar
    rules:
    - apiGroups: ["config.openshift.io"]
      resources: ["infrastructures", "infrastructures/status"] 1
      verbs: ["get", "watch", "list"]
    1
    The resourcedetectionprocessor requires permissions for infrastructures and infrastructures/status.
  4. Create a ClusterRoleBinding to set the permissions for the service account.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector-sidecar
    subjects:
    - kind: ServiceAccount
      name: otel-collector-sidecar
      namespace: <otel-collector-namespace>
    roleRef:
      kind: ClusterRole
      name: otel-collector-sidecar
      apiGroup: rbac.authorization.k8s.io
  5. Deploy the OpenTelemetry Collector as a sidecar.
  6. Remove the injected Jaeger Agent from your application by removing the "sidecar.jaegertracing.io/inject": "true" annotation from your Deployment object.
  7. Enable automatic injection of the OpenTelemetry sidecar by adding the sidecar.opentelemetry.io/inject: "true" annotation to the .spec.template.metadata.annotations field of your Deployment object.
  8. Use the created service account for the deployment of your application so that the processors can get the correct information and add it to your traces. An example Deployment that combines these settings follows this procedure.
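
The following Deployment is a minimal sketch that puts these steps together. The application name my-app and the container image are hypothetical; the annotation and the service account are the ones described in this procedure.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.opentelemetry.io/inject: "true" # Enables automatic injection of the OpenTelemetry Collector sidecar.
    spec:
      serviceAccountName: otel-collector-sidecar # Service account created earlier in this procedure.
      containers:
        - name: my-app
          image: quay.io/example/my-app:latest # Hypothetical application image.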

11.2. Migrating without sidecars

You can migrate from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecar deployment.

Prerequisites

  • The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster.
  • The Red Hat build of OpenTelemetry is installed.

Procedure

  1. Configure the OpenTelemetry Collector deployment.
  2. Create the project where the OpenTelemetry Collector will be deployed.

    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: observability
  3. Create a service account for running the OpenTelemetry Collector instance.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-deployment
      namespace: observability
  4. Create a cluster role for setting the required permissions for the processors.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] 1 2
      verbs: ["get", "watch", "list"]
    1
    Permissions for the pods and namespaces resources are required for the k8sattributesprocessor.
    2
    Permissions for infrastructures and infrastructures/status are required for the resourcedetectionprocessor.
  5. Create a ClusterRoleBinding to set the permissions for the service account.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-deployment
      namespace: observability
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io
  6. Create the OpenTelemetry Collector instance.

    Note

    This Collector exports traces to a TempoStack instance. You must create the TempoStack instance by using the Tempo Operator and specify its correct endpoint here.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: observability
    spec:
      mode: deployment
      serviceAccount: otel-collector-deployment
      config: |
        receivers:
          jaeger:
            protocols:
              grpc: {}
              thrift_binary: {}
              thrift_compact: {}
              thrift_http: {}
        processors:
          batch: {}
          k8sattributes:
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
        exporters:
          otlp:
            endpoint: "tempo-example-gateway:8090"
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [jaeger]
              processors: [memory_limiter, k8sattributes, resourcedetection, batch]
              exporters: [otlp]
  7. Point your tracing endpoint to the OpenTelemetry Collector.
  8. If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint.

    Example of exporting traces by using the Jaeger exporter with Golang

    exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1

    1
    The URL points to the OpenTelemetry Collector API endpoint.

Chapter 12. Upgrading

For version upgrades, the Red Hat build of OpenTelemetry Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster.

OLM runs in OpenShift Container Platform by default. It queries for available Operators as well as for upgrades of installed Operators.

When the Red Hat build of OpenTelemetry Operator is upgraded to the new version, it scans for running OpenTelemetry Collector instances that it manages and upgrades them to the version corresponding to the Operator’s new version.
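
For example, assuming the Operator is installed in the default openshift-opentelemetry-operator namespace, you can check the update channel and the currently installed version that OLM manages:

$ oc get subscriptions,clusterserviceversions -n openshift-opentelemetry-operator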

12.1. Additional resources

Chapter 13. Removing

The steps for removing the Red Hat build of OpenTelemetry from an OpenShift Container Platform cluster are as follows:

  1. Shut down all Red Hat build of OpenTelemetry pods.
  2. Remove any OpenTelemetryCollector instances.
  3. Remove the Red Hat build of OpenTelemetry Operator.

13.1. Removing an OpenTelemetry Collector instance by using the web console

You can remove an OpenTelemetry Collector instance in the Administrator view of the web console.

Prerequisites

  • You are logged in to the web console as a cluster administrator with the cluster-admin role.
  • For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.

Procedure

  1. Go to Operators → Installed Operators → Red Hat build of OpenTelemetry Operator → OpenTelemetryInstrumentation or OpenTelemetryCollector.
  2. To remove the relevant instance, select the Options menu (kebab) → Delete, and then confirm by clicking Delete.
  3. Optional: Remove the Red Hat build of OpenTelemetry Operator.

13.2. Removing an OpenTelemetry Collector instance by using the CLI

You can remove an OpenTelemetry Collector instance on the command line.

Prerequisites

  • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

    Tip
    • Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
    • Run oc login:

      $ oc login --username=<your_username>

Procedure

  1. Get the name of the OpenTelemetry Collector instance by running the following command:

    $ oc get deployments -n <project_of_opentelemetry_instance>
  2. Remove the OpenTelemetry Collector instance by running the following command:

    $ oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>
  3. Optional: Remove the Red Hat build of OpenTelemetry Operator.

Verification

  • To verify successful removal of the OpenTelemetry Collector instance, run oc get deployments again:

    $ oc get deployments -n <project_of_opentelemetry_instance>

13.3. Additional resources

Legal Notice

Copyright © 2024 Red Hat, Inc.

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
