Red Hat build of OpenTelemetry
Configuring and using the Red Hat build of OpenTelemetry in OpenShift Container Platform
Chapter 1. Release notes for the Red Hat build of OpenTelemetry
1.1. Red Hat build of OpenTelemetry overview
Red Hat build of OpenTelemetry is based on the open source OpenTelemetry project, which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. The Red Hat build of OpenTelemetry provides support for deploying and managing the OpenTelemetry Collector and for simplifying workload instrumentation.
The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs.
The OpenTelemetry Collector has a number of features including the following:
- Data Collection and Processing Hub
- It acts as a central component that gathers telemetry data like metrics and traces from various sources. This data can be created from instrumented applications and infrastructure.
- Customizable telemetry data pipeline
- The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers.
- Auto-instrumentation features
- Automatic instrumentation simplifies the process of adding observability to applications. Developers don’t need to manually instrument their code for basic telemetry data.
Here are some of the use cases for the OpenTelemetry Collector:
- Centralized data collection
- In a microservices architecture, the Collector can be deployed to aggregate data from multiple services.
- Data enrichment and processing
- Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data.
- Multi-backend receiving and exporting
- The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously.
You can use the Red Hat build of OpenTelemetry in combination with the Red Hat OpenShift Distributed Tracing Platform.
Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat support.
1.2. Release notes for Red Hat build of OpenTelemetry 3.6.1
The Red Hat build of OpenTelemetry 3.6.1 is provided through the Red Hat build of OpenTelemetry Operator 0.127.0.
The Red Hat build of OpenTelemetry 3.6.1 is based on the open source OpenTelemetry release 0.127.0.
1.2.1. CVEs
This release fixes the following CVEs:
1.2.2. Known issues
There is currently a known issue with the following exporters:
- AWS CloudWatch Logs Exporter
- AWS EMF Exporter
- AWS X-Ray Exporter
This known issue affects deployments that use the optional endpoint field of the exporter configuration in the OpenTelemetryCollector custom resource. Not specifying the protocol, such as https://, as part of the endpoint value results in the unsupported protocol scheme error.
Workaround: Include the protocol, such as https://, as part of the endpoint value, as in the sketch that follows.
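For example, the following sketch includes the protocol in the endpoint value of an AWS X-Ray Exporter configuration; the region in the host name is an illustrative assumption:
exporters:
  awsxray:
    # Including the https:// protocol avoids the "unsupported protocol scheme" error.
    endpoint: "https://xray.us-east-1.amazonaws.com"  # assumption: region-specific host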
1.3. Release notes for Red Hat build of OpenTelemetry 3.6
The Red Hat build of OpenTelemetry 3.6 is provided through the Red Hat build of OpenTelemetry Operator 0.127.0.
The Red Hat build of OpenTelemetry 3.6 is based on the open source OpenTelemetry release 0.127.0.
1.3.1. CVEs
This release fixes the following CVEs:
1.3.2. Technology Preview features
This update introduces the following Technology Preview features:
- Tail Sampling Processor
- Cumulative-to-Delta Processor
Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.3.3. New features and enhancements
This update introduces the following enhancements:
The following Technology Preview features reach General Availability:
- Kafka Exporter
- Attributes Processor
- Resource Processor
- Prometheus Receiver
- With this update, the OpenTelemetry Collector can read TLS certificates in the tss2 format according to the TPM Software Stack (TSS) specification 2.0 of the Trusted Platform Module (TPM) 2.0 Library by the Trusted Computing Group (TCG).
- With this update, the Red Hat build of OpenTelemetry Operator automatically upgrades all OpenTelemetryCollector custom resources during its startup and reconciles all managed instances. If an upgrade fails, the Operator retries it with exponential backoff and attempts the upgrade again when the Operator restarts.
1.3.4. Removal notice
In the Red Hat build of OpenTelemetry 3.6, the Loki Exporter, which is a temporary Technology Preview feature, is removed. If you currently use the Loki Exporter for Loki 3.0 or later, replace the Loki Exporter with the OTLP HTTP Exporter.
The Loki Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.3.5. Known issues
There is currently a known issue with the following exporters:
- AWS CloudWatch Logs Exporter
- AWS EMF Exporter
- AWS X-Ray Exporter
This known issue affects deployments that use the optional endpoint field of the exporter configuration in the OpenTelemetryCollector custom resource. Not specifying the protocol, such as https://, as part of the endpoint value results in the unsupported protocol scheme error.
Workaround: Include the protocol, such as https://, as part of the endpoint value.
1.4. Release notes for Red Hat build of OpenTelemetry 3.5.1
The Red Hat build of OpenTelemetry 3.5.1 is provided through the Red Hat build of OpenTelemetry Operator 0.119.0.
The Red Hat build of OpenTelemetry 3.5.1 is based on the open source OpenTelemetry release 0.119.0.
1.4.1. CVEs
This release fixes the following CVEs:
1.5. Release notes for Red Hat build of OpenTelemetry 3.5
The Red Hat build of OpenTelemetry 3.5 is provided through the Red Hat build of OpenTelemetry Operator 0.119.0.
The Red Hat build of OpenTelemetry 3.5 is based on the open source OpenTelemetry release 0.119.0.
1.5.1. Technology Preview features
This update introduces the following Technology Preview features:
- AWS CloudWatch Exporter
- AWS EMF Exporter
- AWS X-Ray Exporter
Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.5.2. New features and enhancements
This update introduces the following enhancements:
The following Technology Preview features reach General Availability:
- Host Metrics Receiver
- Kubelet Stats Receiver
- With this update, the OpenTelemetry Collector uses the OTLP HTTP Exporter to push logs to a LokiStack instance, as in the sketch after this list.
- With this update, the Operator automatically creates RBAC rules for the Kubernetes Events Receiver (k8sevents), Kubernetes Cluster Receiver (k8scluster), and Kubernetes Objects Receiver (k8sobjects) if the Operator has sufficient permissions. For more information, see "Creating the required RBAC resources automatically" in Configuring the Collector.
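The following is a minimal sketch of such a log pipeline; the gateway URL, the application tenant path, and the mounted CA file path are illustrative assumptions for a LokiStack instance named logging-loki in the openshift-logging namespace:
exporters:
  otlphttp:
    # Assumption: gateway service of a LokiStack instance named logging-loki
    endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp
    tls:
      ca_file: /etc/pki/ca-trust/source/service-ca.crt  # assumption: service CA mounted into the Collector pod
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]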
1.5.3. Deprecated functionality
In the Red Hat build of OpenTelemetry 3.5, the Loki Exporter, which is a temporary Technology Preview feature, is deprecated. The Loki Exporter is planned to be removed in the Red Hat build of OpenTelemetry 3.6. If you currently use the Loki Exporter for OpenShift Logging 6.1 or later, replace the Loki Exporter with the OTLP HTTP Exporter.
The Loki Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.5.4. Bug fixes
This update introduces the following bug fix:
- Before this update, manually created routes for the Collector services were unintentionally removed when the Operator pod was restarted. With this update, restarting the Operator pod does not result in the removal of the manually created routes.
1.6. Release notes for Red Hat build of OpenTelemetry 3.4
The Red Hat build of OpenTelemetry 3.4 is provided through the Red Hat build of OpenTelemetry Operator 0.113.0.
The Red Hat build of OpenTelemetry 3.4 is based on the open source OpenTelemetry release 0.113.0.
1.6.1. Technology Preview features
This update introduces the following Technology Preview features:
- OpenTelemetry Protocol (OTLP) JSON File Receiver
- Count Connector
Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.6.2. New features and enhancements
This update introduces the following enhancements:
The following Technology Preview features reach General Availability:
- BearerTokenAuth Extension
- Kubernetes Attributes Processor
- Spanmetrics Connector
- You can use the instrumentation.opentelemetry.io/inject-sdk annotation with the Instrumentation custom resource to enable injection of the OpenTelemetry SDK environment variables into multi-container pods, as in the sketch after this list.
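For example, a sketch of a pod that opts in to SDK injection; the pod, container, and image names are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: my-app  # assumption
  annotations:
    # Injects the OTEL_* SDK environment variables defined by the
    # Instrumentation custom resource into the containers of the pod.
    instrumentation.opentelemetry.io/inject-sdk: "true"
spec:
  containers:
  - name: app
    image: quay.io/example/my-app:latest  # assumption
  - name: sidecar
    image: quay.io/example/my-sidecar:latest  # assumption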
1.6.3. Removal notice
In the Red Hat build of OpenTelemetry 3.4, the Logging Exporter has been removed from the Collector. Use the Debug Exporter instead.
Warning: If you have the Logging Exporter configured, upgrading to the Red Hat build of OpenTelemetry 3.4 will cause crash loops. To avoid such issues, configure the Red Hat build of OpenTelemetry to use the Debug Exporter instead of the Logging Exporter before upgrading to the Red Hat build of OpenTelemetry 3.4.
- In the Red Hat build of OpenTelemetry 3.4, the Technology Preview Memory Ballast Extension has been removed. As an alternative, you can use the GOMEMLIMIT environment variable, as in the sketch that follows.
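As a sketch, you can set the GOMEMLIMIT soft memory limit through the OpenTelemetryCollector custom resource; the 400MiB value is an illustrative assumption, sized to roughly 80% of a 512Mi container memory limit:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  env:
  - name: GOMEMLIMIT
    value: "400MiB"  # assumption: ~80% of the container memory limit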
1.7. Release notes for Red Hat build of OpenTelemetry 3.3.1
The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.
The Red Hat build of OpenTelemetry 3.3.1 is based on the open source OpenTelemetry release 0.107.0.
1.7.1. Bug fixes
This update introduces the following bug fix:
- Before this update, injection of the NGINX auto-instrumentation failed when copying the instrumentation libraries into the application container. With this update, the copy command is configured correctly, which fixes the issue. (TRACING-4673)
1.8. Release notes for Red Hat build of OpenTelemetry 3.3
The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.
The Red Hat build of OpenTelemetry 3.3 is based on the open source OpenTelemetry release 0.107.0.
1.8.1. CVEs
This release fixes the following CVEs:
1.8.2. Technology Preview features
This update introduces the following Technology Preview features:
- Group-by-Attributes Processor
- Transform Processor
- Routing Connector
- Prometheus Remote Write Exporter
- Exporting logs to the LokiStack log store
Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.8.3. New features and enhancements
This update introduces the following enhancements:
- A Collector dashboard for viewing the internal Collector metrics and analyzing Collector health and performance. (TRACING-3768)
- Support for automatically reloading certificates in both the OpenTelemetry Collector and instrumentation. (TRACING-4186)
1.8.4. Bug fixes
This update introduces the following bug fixes:
- Before this update, the ServiceMonitor object was failing to scrape Operator metrics due to missing permissions for accessing the metrics endpoint. With this update, this issue is fixed by creating the ServiceMonitor custom resource when Operator monitoring is enabled. (TRACING-4288)
- Before this update, the Collector service and the headless service were both monitoring the same endpoints, which caused duplication of metrics collection and ServiceMonitor objects. With this update, this issue is fixed by not creating the headless service. (OBSDA-773)
1.9. Release notes for Red Hat build of OpenTelemetry 3.2.2
The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.
1.9.1. CVEs
This release fixes the following CVEs:
1.9.2. Bug fixes
This update introduces the following bug fix:
- Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the Operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the Operator ignores this new annotation. (TRACING-4435)
1.10. Release notes for Red Hat build of OpenTelemetry 3.2.1
The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.
1.10.1. CVEs
This release fixes the following CVEs:
1.10.2. New features and enhancements
This update introduces the following enhancement:
- Red Hat build of OpenTelemetry 3.2.1 is based on the open source OpenTelemetry release 0.102.1.
1.11. Release notes for Red Hat build of OpenTelemetry 3.2
The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.
1.11.1. Technology Preview features
This update introduces the following Technology Preview features:
- Host Metrics Receiver
- OIDC Auth Extension
- Kubernetes Cluster Receiver
- Kubernetes Events Receiver
- Kubernetes Objects Receiver
- Load-Balancing Exporter
- Kubelet Stats Receiver
- Cumulative to Delta Processor
- Forward Connector
- Journald Receiver
- Filelog Receiver
- File Storage Extension
Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.11.2. New features and enhancements
This update introduces the following enhancement:
- Red Hat build of OpenTelemetry 3.2 is based on the open source OpenTelemetry release 0.100.0.
1.11.3. Deprecated functionality
In Red Hat build of OpenTelemetry 3.2, use of empty values and null keywords in the OpenTelemetry Collector custom resource is deprecated and planned to be unsupported in a future release. Red Hat will provide bug fixes and support for this syntax during the current release lifecycle, but this syntax will become unsupported. As an alternative to empty values and null keywords, update the OpenTelemetry Collector custom resource to contain empty JSON objects, written as open-closed braces {}, instead, as in the sketch that follows.
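A sketch of both forms, using the Zipkin Receiver as an illustrative component:
# Deprecated: empty value or null keyword
receivers:
  zipkin:

# Supported alternative: an empty JSON object
receivers:
  zipkin: {}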
1.11.4. Bug fixes
This update introduces the following bug fix:
- Before this update, the checkbox to enable Operator monitoring was not available in the web console when installing the Red Hat build of OpenTelemetry Operator. As a result, a ServiceMonitor resource was not created in the openshift-opentelemetry-operator namespace. With this update, the checkbox appears for the Red Hat build of OpenTelemetry Operator in the web console so that Operator monitoring can be enabled during installation. (TRACING-3761)
1.12. Release notes for Red Hat build of OpenTelemetry 3.1.1
The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.
1.12.1. CVEs
This release fixes CVE-2023-39326.
1.13. Release notes for Red Hat build of OpenTelemetry 3.1
The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator.
1.13.1. Technology Preview features
This update introduces the following Technology Preview feature:
- The target allocator is an optional component of the OpenTelemetry Operator that shards Prometheus receiver scrape targets across the deployed fleet of OpenTelemetry Collector instances. The target allocator provides integration with the Prometheus PodMonitor and ServiceMonitor custom resources.
The target allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.13.2. New features and enhancements
This update introduces the following enhancement:
- Red Hat build of OpenTelemetry 3.1 is based on the open source OpenTelemetry release 0.93.0.
1.14. Release notes for Red Hat build of OpenTelemetry 3.0
1.14.1. New features and enhancements
This update introduces the following enhancements:
- Red Hat build of OpenTelemetry 3.0 is based on the open source OpenTelemetry release 0.89.0.
- The OpenShift distributed tracing data collection Operator is renamed as the Red Hat build of OpenTelemetry Operator.
- Support for the ARM architecture.
- Support for the Prometheus receiver for metrics collection.
- Support for the Kafka receiver and exporter for sending traces and metrics to Kafka.
- Support for cluster-wide proxy environments.
- The Red Hat build of OpenTelemetry Operator creates the Prometheus ServiceMonitor custom resource if the Prometheus exporter is enabled.
- The Operator enables the Instrumentation custom resource that allows injecting upstream OpenTelemetry auto-instrumentation libraries.
1.14.2. Removal notice
In Red Hat build of OpenTelemetry 3.0, the Jaeger exporter has been removed. Bug fixes and support are provided only through the end of the 2.9 lifecycle. As an alternative to the Jaeger exporter for sending data to the Jaeger collector, you can use the OTLP exporter instead.
1.14.3. Bug fixes
This update introduces the following bug fixes:
- Fixed support for disconnected environments when using the oc adm catalog mirror CLI command.
1.14.4. Known issues
There is currently a known issue:
- Currently, the cluster monitoring of the Red Hat build of OpenTelemetry Operator is disabled due to a bug (TRACING-3761). The bug is preventing the cluster monitoring from scraping metrics from the Red Hat build of OpenTelemetry Operator due to a missing openshift.io/cluster-monitoring=true label that is required for the cluster monitoring and service monitor object.
Workaround: You can enable the cluster monitoring as follows:
- Add the following label in the Operator namespace:
$ oc label namespace openshift-opentelemetry-operator openshift.io/cluster-monitoring=true
- Create a service monitor, role, and role binding, as in the sketch that follows.
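The following is a sketch of these objects, modeled on the known workaround for TRACING-3761; treat the metrics port name and label selector as assumptions to verify against your Operator installation:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: opentelemetry-operator-metrics
  namespace: openshift-opentelemetry-operator
spec:
  endpoints:
  - port: https  # assumption: name of the metrics port on the Operator metrics service
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetry-operator  # assumption
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: otel-operator-prometheus
  namespace: openshift-opentelemetry-operator
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: otel-operator-prometheus
  namespace: openshift-opentelemetry-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: otel-operator-prometheus
subjects:
- kind: ServiceAccount
  name: prometheus-k8s  # the cluster-monitoring Prometheus service account
  namespace: openshift-monitoring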
1.15. Release notes for Red Hat build of OpenTelemetry 2.9.2
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.9.2 is based on the open source OpenTelemetry release 0.81.0.
1.15.1. CVEs
- This release fixes CVE-2023-46234.
1.15.2. Known issues
There is currently a known issue:
- Currently, you must manually set Operator maturity to Level IV, Deep Insights. (TRACING-3431)
1.16. Release notes for Red Hat build of OpenTelemetry 2.9.1
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.9.1 is based on the open source OpenTelemetry release 0.81.0.
1.16.1. CVEs
- This release fixes CVE-2023-44487.
1.16.2. Known issues
There is currently a known issue:
- Currently, you must manually set Operator maturity to Level IV, Deep Insights. (TRACING-3431)
1.17. Release notes for Red Hat build of OpenTelemetry 2.9
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.9 is based on the open source OpenTelemetry release 0.81.0.
1.17.1. New features and enhancements
This release introduces the following enhancements for the Red Hat build of OpenTelemetry:
- Support for OTLP metrics ingestion. The metrics can be forwarded and stored in the user-workload-monitoring stack via the Prometheus exporter.
- Support for the Operator maturity Level IV, Deep Insights, which enables upgrading and monitoring of OpenTelemetry Collector instances and the Red Hat build of OpenTelemetry Operator.
- Report traces and metrics from remote clusters using OTLP or HTTP and HTTPS.
- Collect OpenShift Container Platform resource attributes via the resourcedetection processor.
- Support for the managed and unmanaged states in the OpenTelemetryCollector custom resource.
1.17.2. Known issues
There is currently a known issue:
- Currently, you must manually set Operator maturity to Level IV, Deep Insights. (TRACING-3431)
1.18. Release notes for Red Hat build of OpenTelemetry 2.8
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.8 is based on the open source OpenTelemetry release 0.74.0.
1.18.1. Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
1.19. Release notes for Red Hat build of OpenTelemetry 2.7
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.7 is based on the open source OpenTelemetry release 0.63.1.
1.19.1. Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
1.20. Release notes for Red Hat build of OpenTelemetry 2.6
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.6 is based on the open source OpenTelemetry release 0.60.
1.20.1. Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
1.21. Release notes for Red Hat build of OpenTelemetry 2.5
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.5 is based on the open source OpenTelemetry release 0.56.
1.21.1. New features and enhancements
This update introduces the following enhancement:
- Support in the Red Hat build of OpenTelemetry Operator for collecting Kubernetes resource attributes.
1.21.2. Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
1.22. Release notes for Red Hat build of OpenTelemetry 2.4
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.4 is based on the open source OpenTelemetry release 0.49.
1.22.1. Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
1.23. Release notes for Red Hat build of OpenTelemetry 2.3
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.3.1 is based on the open source OpenTelemetry release 0.44.1.
Red Hat build of OpenTelemetry 2.3.0 is based on the open source OpenTelemetry release 0.44.0.
1.23.1. Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
1.24. Release notes for Red Hat build of OpenTelemetry 2.2
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.2 is based on the open source OpenTelemetry release 0.42.0.
1.24.1. Technology Preview features
The unsupported OpenTelemetry Collector components included in the 2.1 release are removed.
1.24.2. Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
1.25. Release notes for Red Hat build of OpenTelemetry 2.1
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.1 is based on the open source OpenTelemetry release 0.41.1.
1.25.1. Technology Preview features
This release introduces a breaking change to how certificates are configured in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following sketch for OpenTelemetry version 0.33 and for OpenTelemetry version 0.41.1.
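A sketch of both configurations, using an OTLP exporter as the illustrative component and example.com:4317 as an assumed endpoint:
# CA file configuration for OpenTelemetry version 0.33
exporters:
  otlp:
    endpoint: example.com:4317  # assumption
    ca_file: ca.pem

# CA file configuration for OpenTelemetry version 0.41.1
exporters:
  otlp:
    endpoint: example.com:4317  # assumption
    tls:
      ca_file: ca.pem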
1.25.2. Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
1.26. Release notes for Red Hat build of OpenTelemetry 2.0
The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat build of OpenTelemetry 2.0 is based on the open source OpenTelemetry release 0.33.0.
This release adds the Red Hat build of OpenTelemetry as a Technology Preview, which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat OpenShift distributed tracing platform. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor-agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling.
1.27. Getting support
If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal. From the Customer Portal, you can:
- Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
- Submit a support case to Red Hat Support.
- Access other product documentation.
To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console. Insights provides details about issues and, if available, information on how to solve a problem.
If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.
Chapter 2. Installing
Installing the Red Hat build of OpenTelemetry involves the following steps:
- Installing the Red Hat build of OpenTelemetry Operator.
- Creating a namespace for an OpenTelemetry Collector instance.
- Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance.
2.1. Installing the Red Hat build of OpenTelemetry from the web console
You can install the Red Hat build of OpenTelemetry from the Administrator view of the web console.
Prerequisites
- You are logged in to the web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure
Install the Red Hat build of OpenTelemetry Operator:
- Go to Operators → OperatorHub and search for Red Hat build of OpenTelemetry Operator. Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat → Install → Install → View Operator.
Important: This installs the Operator with the default presets:
- Update channel → stable
- Installation mode → All namespaces on the cluster
- Installed Namespace → openshift-opentelemetry-operator
- Update approval → Automatic
- In the Details tab of the installed Operator page, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
- Create a permitted project of your choice for the OpenTelemetry Collector instance that you will create in the next step by going to Home → Projects → Create Project. Project names beginning with the openshift- prefix are not permitted.
- Create an OpenTelemetry Collector instance:
- Go to Operators → Installed Operators.
- Select OpenTelemetry Collector → Create OpenTelemetry Collector → YAML view.
- In the YAML view, customize the OpenTelemetryCollector custom resource (CR). An example CR sketch follows this procedure.
- Select Create.
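A minimal sketch of such a CR, using an OTLP receiver, a batch processor, and a debug exporter; the instance name otel and all values are illustrative assumptions:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel  # assumption
  namespace: <project_of_opentelemetry_collector_instance>
spec:
  mode: deployment
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]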
Verification
- Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance.
- Go to Operators → Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready.
- Go to Workloads → Pods to verify that all the component pods of the OpenTelemetry Collector instance are running.
2.2. Installing the Red Hat build of OpenTelemetry by using the CLI
You can install the Red Hat build of OpenTelemetry from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
Tip:
- Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
- Run oc login:
$ oc login --username=<your_username>
Procedure
Install the Red Hat build of OpenTelemetry Operator:
- Create a project for the Red Hat build of OpenTelemetry Operator by running the first command in the sketch that follows this list.
- Create an Operator group by running the second command in the sketch that follows this list.
- Create a subscription by running the third command in the sketch that follows this list.
- Check the Operator status by running the following command:
$ oc get csv -n openshift-opentelemetry-operator
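The following is a sketch of these three commands, following the standard Operator Lifecycle Manager pattern; the subscription name opentelemetry-product and the redhat-operators catalog source are assumptions modeled on the default installation:
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  labels:
    kubernetes.io/metadata.name: openshift-opentelemetry-operator
    openshift.io/cluster-monitoring: "true"
  name: openshift-opentelemetry-operator
EOF

$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-opentelemetry-operator
  namespace: openshift-opentelemetry-operator
EOF

$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opentelemetry-product  # assumption
  namespace: openshift-opentelemetry-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: opentelemetry-product
  source: redhat-operators  # assumption
  sourceNamespace: openshift-marketplace
EOF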
Create a permitted project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step:
- To create a permitted project without metadata, run the following command:
$ oc new-project <permitted_project_of_opentelemetry_collector_instance>
Project names beginning with the openshift- prefix are not permitted.
- To create a permitted project with metadata, run a command such as the sketch that follows. Project names beginning with the openshift- prefix are not permitted.
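A sketch of creating a permitted project with metadata; the label shown is an illustrative assumption:
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <permitted_project_of_opentelemetry_collector_instance>  # must not begin with the openshift- prefix
  labels:
    app.kubernetes.io/part-of: my-observability-stack  # assumption
EOF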
Create an OpenTelemetry Collector instance in the project that you created for it.
Note: You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster.
- Customize the OpenTelemetryCollector custom resource (CR). For an example CR, see the sketch in "Installing the Red Hat build of OpenTelemetry from the web console".
- Apply the customized CR by running the following command:
$ oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF
Verification
- Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command:
$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
- Get the OpenTelemetry Collector service by running the following command:
$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
2.3. Using taints and tolerations
To schedule the OpenTelemetry pods on dedicated nodes, see How to deploy the different OpenTelemetry components on infra nodes using nodeSelector and tolerations in OpenShift 4.
2.4. Creating the required RBAC resources automatically
Some Collector components require configuring the RBAC resources.
Procedure
- Add the following permissions to the opentelemetry-operator-controller-manager service account so that the Red Hat build of OpenTelemetry Operator can create them automatically, as in the sketch that follows.
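A sketch of granting the Operator's service account the ability to manage cluster roles and bindings, following the upstream pattern; the generate-processors-rbac name is an assumption:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: generate-processors-rbac  # assumption
rules:
- apiGroups: [rbac.authorization.k8s.io]
  resources: [clusterrolebindings, clusterroles]
  verbs: [create, delete, get, list, patch, update, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: generate-processors-rbac  # assumption
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: generate-processors-rbac
subjects:
- kind: ServiceAccount
  name: opentelemetry-operator-controller-manager
  namespace: openshift-opentelemetry-operator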
Chapter 3. Configuring the Collector
3.1. Configuring the Collector
The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat build of OpenTelemetry resources. You can install the default configuration or modify the file.
3.1.1. OpenTelemetry Collector configuration options
The OpenTelemetry Collector consists of five types of components that access telemetry data:
- Receivers
- Processors
- Exporters
- Connectors
- Extensions
You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, only enable the components that you need.
Example of the OpenTelemetry Collector custom resource file
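A sketch of such a file; the zipkin receiver illustrates callout 1 because it is configured but not added to any pipeline, and all names are illustrative assumptions:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel  # assumption
spec:
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
      zipkin: {}  # 1: configured but not referenced in the service section, so not enabled
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]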
1. If a component is configured but not defined in the service section, the component is not enabled.
Parameter | Description | Values | Default
---|---|---|---
receivers: | A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. | | None
processors: | Processors run through the received data before it is exported. By default, no processors are enabled. | | None
exporters: | An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. | | None
connectors: | Connectors join pairs of pipelines by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers. Connectors can be used to summarize, replicate, or route consumed data. | | None
extensions: | Optional components for tasks that do not involve processing telemetry data. | | None
service: pipelines: | Components are enabled by adding them to a pipeline under service: pipelines:. | |
service: pipelines: traces: receivers: | You enable receivers for tracing by adding them under service: pipelines: traces: receivers:. | | None
service: pipelines: traces: processors: | You enable processors for tracing by adding them under service: pipelines: traces: processors:. | | None
service: pipelines: traces: exporters: | You enable exporters for tracing by adding them under service: pipelines: traces: exporters:. | | None
service: pipelines: metrics: receivers: | You enable receivers for metrics by adding them under service: pipelines: metrics: receivers:. | | None
service: pipelines: metrics: processors: | You enable processors for metrics by adding them under service: pipelines: metrics: processors:. | | None
service: pipelines: metrics: exporters: | You enable exporters for metrics by adding them under service: pipelines: metrics: exporters:. | | None
3.1.2. Creating the required RBAC resources automatically
Some Collector components require configuring the RBAC resources.
Procedure
- Add the following permissions to the opentelemetry-operator-controller-manager service account so that the Red Hat build of OpenTelemetry Operator can create them automatically. For a sketch of these permissions, see "Creating the required RBAC resources automatically" in the Installing chapter.
3.2. Receivers
Receivers get data into the Collector. A receiver can be push or pull based. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources.
Currently, the following General Availability and Technology Preview receivers are available for the Red Hat build of OpenTelemetry:
3.2.1. OTLP Receiver
The OTLP Receiver ingests traces, metrics, and logs by using the OpenTelemetry Protocol (OTLP).
OpenTelemetry Collector custom resource with an enabled OTLP Receiver
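A sketch of the spec.config fragment; the certificate paths are illustrative assumptions, and the numbered comments correspond to the callouts that follow:
config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317  # 1
          tls:  # 2
            cert_file: server.crt  # assumption
            key_file: server.key  # assumption
            client_ca_file: client.crt  # 3 (assumption)
            reload_interval: 1h  # 4
        http:
          endpoint: 0.0.0.0:4318  # 5
          tls: {}  # 6
  exporters:
    debug: {}
  service:
    pipelines:
      traces:
        receivers: [otlp]
        exporters: [debug]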
1. The OTLP gRPC endpoint. If omitted, the default 0.0.0.0:4317 is used.
2. The server-side TLS configuration. Defines paths to TLS certificates. If omitted, TLS is disabled.
3. The path to the TLS certificate at which the server verifies a client certificate. This sets the value of ClientCAs and ClientAuth to RequireAndVerifyClientCert in the TLSConfig. For more information, see the Config of the Golang TLS package.
4. Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns, us (or µs), ms, s, m, h.
5. The OTLP HTTP endpoint. The default value is 0.0.0.0:4318.
6. The server-side TLS configuration. For more information, see the grpc protocol configuration section.
3.2.2. Jaeger Receiver
The Jaeger Receiver ingests traces in the Jaeger formats.
OpenTelemetry Collector custom resource with an enabled Jaeger Receiver
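A sketch of the spec.config fragment; the numbered comments correspond to the callouts that follow:
config:
  receivers:
    jaeger:
      protocols:
        grpc:
          endpoint: 0.0.0.0:14250  # 1
          tls: {}  # 5
        thrift_http:
          endpoint: 0.0.0.0:14268  # 2
        thrift_compact:
          endpoint: 0.0.0.0:6831  # 3
        thrift_binary:
          endpoint: 0.0.0.0:6832  # 4
  exporters:
    debug: {}
  service:
    pipelines:
      traces:
        receivers: [jaeger]
        exporters: [debug]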
1. The Jaeger gRPC endpoint. If omitted, the default 0.0.0.0:14250 is used.
2. The Jaeger Thrift HTTP endpoint. If omitted, the default 0.0.0.0:14268 is used.
3. The Jaeger Thrift Compact endpoint. If omitted, the default 0.0.0.0:6831 is used.
4. The Jaeger Thrift Binary endpoint. If omitted, the default 0.0.0.0:6832 is used.
5. The server-side TLS configuration. See the OTLP Receiver configuration section for more details.
3.2.3. Host Metrics Receiver
The Host Metrics Receiver ingests metrics in the OTLP format.
OpenTelemetry Collector custom resource with an enabled Host Metrics Receiver
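A sketch of the spec.config fragment; the interval values and the chosen scrapers are illustrative assumptions, and the numbered comments correspond to the callouts that follow:
config:
  receivers:
    hostmetrics:
      collection_interval: 10s  # 1 (assumption)
      initial_delay: 1s  # 2
      root_path: /  # 3
      scrapers:  # 4
        cpu: {}
        memory: {}
        disk: {}
  exporters:
    debug: {}
  service:
    pipelines:
      metrics:
        receivers: [hostmetrics]
        exporters: [debug]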
1. Sets the time interval for host metrics collection. If omitted, the default value is 1m.
2. Sets the initial time delay for host metrics collection. If omitted, the default value is 1s.
3. Configures the root_path so that the Host Metrics Receiver knows where the root filesystem is. If running multiple instances of the Host Metrics Receiver, set the same root_path value for each instance.
4. Lists the enabled host metrics scrapers. Available scrapers are cpu, disk, load, filesystem, memory, network, paging, processes, and process.
3.2.4. Kubernetes Objects Receiver
The Kubernetes Objects Receiver pulls or watches objects to be collected from the Kubernetes API server. This receiver watches primarily Kubernetes events, but it can collect any type of Kubernetes objects. This receiver gathers telemetry for the cluster as a whole, so only one instance of this receiver suffices for collecting all the data.
The Kubernetes Objects Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with an enabled Kubernetes Objects Receiver
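A sketch of the spec.config fragment; the selector values and namespace are illustrative assumptions, and the numbered comments correspond to the callouts that follow:
config:
  receivers:
    k8sobjects:
      auth_type: serviceAccount
      objects:
      - name: pods  # 1
        mode: pull  # 2
        interval: 15m  # 3 (assumption)
        label_selector: app=my-app  # 4 (assumption)
        field_selector: status.phase=Running  # 5 (assumption)
        namespaces: [default]  # 6 (assumption)
  exporters:
    debug: {}
  service:
    pipelines:
      logs:
        receivers: [k8sobjects]
        exporters: [debug]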
1. The resource name that this receiver observes: for example, pods, deployments, or events.
2. The observation mode that this receiver uses: pull or watch.
3. Only applicable to the pull mode. The request interval for pulling an object. If omitted, the default value is 1h.
4. The label selector to define targets.
5. The field selector to filter targets.
6. The list of namespaces to collect events from. If omitted, the default value is all.
3.2.5. Kubelet Stats Receiver
The Kubelet Stats Receiver extracts metrics related to nodes, pods, containers, and volumes from the kubelet’s API server. These metrics are then channeled through the metrics-processing pipeline for additional analysis.
OpenTelemetry Collector custom resource with an enabled Kubelet Stats Receiver
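A sketch of the spec fragment, including the env entry that supplies K8S_NODE_NAME through the downward API; the collection interval is an illustrative assumption, and the numbered comment corresponds to the callout that follows:
spec:
  config:
    receivers:
      kubeletstats:
        collection_interval: 20s  # assumption
        auth_type: serviceAccount
        endpoint: "https://${env:K8S_NODE_NAME}:10250"  # 1
        insecure_skip_verify: true
    exporters:
      debug: {}
    service:
      pipelines:
        metrics:
          receivers: [kubeletstats]
          exporters: [debug]
  env:
  - name: K8S_NODE_NAME  # 1
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName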
1. Sets the K8S_NODE_NAME to authenticate to the API.
The Kubelet Stats Receiver requires additional permissions for the service account used for running the OpenTelemetry Collector.
Permissions required by the service account
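A sketch of a ClusterRole granting these permissions; the otel-collector name is an assumption, and the numbered comment corresponds to the callout that follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector  # assumption
rules:
- apiGroups: ['']
  resources: ['nodes/stats']
  verbs: ['get', 'watch', 'list']
- apiGroups: ['']
  resources: ['nodes/proxy']  # 1
  verbs: ['get']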
1. The permissions required when using the extra_metadata_labels or request_utilization or limit_utilization metrics.
3.2.6. Prometheus Receiver
The Prometheus Receiver scrapes the metrics endpoints.
OpenTelemetry Collector custom resource with an enabled Prometheus Receiver
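A sketch of the spec.config fragment; the numbered comments correspond to the callouts that follow, and the target address assumes a my-app service in the example project:
config:
  receivers:
    prometheus:
      config:
        scrape_configs:  # 1
        - job_name: 'my-app'  # 2
          scrape_interval: 5s  # 3
          static_configs:
          - targets: ['my-app.example.svc.cluster.local:8888']  # 4 (assumption)
  exporters:
    debug: {}
  service:
    pipelines:
      metrics:
        receivers: [prometheus]
        exporters: [debug]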
1. Scrape configurations in the Prometheus format.
2. The Prometheus job name.
3. The interval for scraping the metrics data. Accepts time units. The default value is 1m.
4. The targets at which the metrics are exposed. This example scrapes the metrics from a my-app application in the example project.
3.2.7. OTLP JSON File Receiver
The OTLP JSON File Receiver extracts pipeline information from files containing data in the ProtoJSON format and conforming to the OpenTelemetry Protocol specification. The receiver watches a specified directory for changes such as created or modified files to process.
The OTLP JSON File Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled OTLP JSON File Receiver
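A sketch of the spec.config fragment; the watched file pattern is an illustrative assumption:
config:
  receivers:
    otlpjsonfile:
      include:
      - "/var/log/otlp/*.json"  # assumption: ProtoJSON files in the watched directory
  exporters:
    debug: {}
  service:
    pipelines:
      logs:
        receivers: [otlpjsonfile]
        exporters: [debug]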
3.2.8. Zipkin Receiver
The Zipkin Receiver ingests traces in the Zipkin v1 and v2 formats.
OpenTelemetry Collector custom resource with the enabled Zipkin Receiver
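A minimal sketch of the spec.config fragment:
config:
  receivers:
    zipkin:
      endpoint: 0.0.0.0:9411  # default Zipkin endpoint
  exporters:
    debug: {}
  service:
    pipelines:
      traces:
        receivers: [zipkin]
        exporters: [debug]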
3.2.9. Kafka Receiver
The Kafka Receiver receives traces, metrics, and logs from Kafka in the OTLP format.
The Kafka Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Kafka Receiver
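A sketch of the spec.config fragment; the broker address, credentials, certificate paths, and server name are illustrative assumptions, and the numbered comments correspond to the callouts that follow:
config:
  receivers:
    kafka:
      brokers: ["broker.example.com:9092"]  # 1 (assumption)
      protocol_version: 2.0.0  # 2
      topic: otlp_spans  # 3
      auth:
        plain_text:  # 4
          username: example  # assumption
          password: example  # assumption
        tls:  # 5
          ca_file: ca.pem  # assumption
          cert_file: cert.pem  # assumption
          key_file: key.pem  # assumption
          insecure: false  # 6
          server_name_override: kafka.example.com  # 7 (assumption)
  exporters:
    debug: {}
  service:
    pipelines:
      traces:
        receivers: [kafka]
        exporters: [debug]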
1. The list of Kafka brokers. The default is localhost:9092.
2. The Kafka protocol version. For example, 2.0.0. This is a required field.
3. The name of the Kafka topic to read from. The default is otlp_spans.
4. The plain text authentication configuration. If omitted, plain text authentication is disabled.
5. The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
6. Disables verifying the server's certificate chain and host name. The default is false.
7. ServerName indicates the name of the server requested by the client to support virtual hosting.
3.2.10. Kubernetes Cluster Receiver
The Kubernetes Cluster Receiver gathers cluster metrics and entity events from the Kubernetes API server. It uses the Kubernetes API to receive information about updates. Authentication for this receiver is only supported through service accounts.
The Kubernetes Cluster Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Kubernetes Cluster Receiver
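A sketch of the spec.config fragment; the collection interval is an illustrative assumption:
config:
  receivers:
    k8s_cluster:
      distribution: openshift
      collection_interval: 10s  # assumption
  exporters:
    debug: {}
  service:
    pipelines:
      metrics:
        receivers: [k8s_cluster]
        exporters: [debug]
      logs:
        receivers: [k8s_cluster]
        exporters: [debug]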
This receiver requires a configured service account, RBAC rules for the cluster role, and the cluster role binding that binds the RBAC with the service account.
ServiceAccount, ClusterRole, and ClusterRoleBinding objects
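A sketch of the three objects; the otel-collector name is an assumption, and the resource list covers common cluster metrics sources, so verify it against your receiver configuration:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector  # assumption
  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector  # assumption
rules:
- apiGroups: ['']
  resources: ['events', 'namespaces', 'namespaces/status', 'nodes', 'nodes/spec', 'pods', 'pods/status', 'replicationcontrollers', 'replicationcontrollers/status', 'resourcequotas', 'services']
  verbs: ['get', 'list', 'watch']
- apiGroups: ['apps']
  resources: ['daemonsets', 'deployments', 'replicasets', 'statefulsets']
  verbs: ['get', 'list', 'watch']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector  # assumption
subjects:
- kind: ServiceAccount
  name: otel-collector
  namespace: <namespace>
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io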
3.2.11. OpenCensus Receiver
The OpenCensus Receiver provides backwards compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format over gRPC or HTTP and JSON.
OpenTelemetry Collector custom resource with the enabled OpenCensus Receiver
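A sketch of the spec.config fragment; the allowed CORS origin is an illustrative assumption, and the numbered comments correspond to the callouts that follow:
config:
  receivers:
    opencensus:
      endpoint: 0.0.0.0:55678  # 1
      tls: {}  # 2
      cors_allowed_origins:  # 3
      - https://*.example.com  # assumption
  exporters:
    debug: {}
  service:
    pipelines:
      traces:
        receivers: [opencensus]
        exporters: [debug]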
- 1: The OpenCensus endpoint. If omitted, the default is 0.0.0.0:55678.
- 2: The server-side TLS configuration. See the OTLP Receiver configuration section for more details.
- 3: You can also use the HTTP JSON endpoint to optionally configure CORS, which is enabled by specifying a list of allowed CORS origins in this field. Wildcards with * are accepted under the cors_allowed_origins. To match any origin, enter only *.
3.2.12. Filelog Receiver
The Filelog Receiver tails and parses logs from files.
The Filelog Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Filelog Receiver that tails a text file
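A minimal sketch that tails a single file and parses a timestamp and severity from each line; the file path and the regular expression are illustrative, and the operator fields follow the upstream Filelog Receiver configuration:

```yaml
receivers:
  filelog:
    include: [/simple.log]   # path of the file to tail
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.sev
```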
3.2.13. Journald Receiver
The Journald Receiver parses journald events from the systemd journal and sends them as logs.
The Journald Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Journald Receiver
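A minimal sketch of the receivers fragment under spec.config; the unit names are illustrative, and the numbered comments correspond to the callouts below:

```yaml
receivers:
  journald:
    files: /var/log/journal/*/*
    priority: info            # 1
    units:                    # 2
      - kubelet
      - crio
    all: true                 # 3
    retry_on_failure:
      enabled: true           # 4
      initial_interval: 1s    # 5
      max_interval: 30s       # 6
      max_elapsed_time: 5m    # 7
```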
- 1: Filters output by message priorities or priority ranges. The default value is info.
- 2: Lists the units to read entries from. If empty, entries are read from all units.
- 3: Includes very long logs and logs with unprintable characters. The default value is false.
- 4: If set to true, the receiver pauses reading a file and attempts to resend the current batch of logs when encountering an error from downstream components. The default value is false.
- 5: The time interval to wait after the first failure before retrying. The default value is 1s. The supported units are ms, s, m, h.
- 6: The upper bound for the retry backoff interval. When this value is reached, the time interval between consecutive retry attempts remains constant at this value. The default value is 30s. The supported units are ms, s, m, h.
- 7: The maximum time interval, including retry attempts, for attempting to send a logs batch to a downstream consumer. When this value is reached, the data are discarded. If the set value is 0, retrying never stops. The default value is 5m. The supported units are ms, s, m, h.
3.2.14. Kubernetes Events Receiver
The Kubernetes Events Receiver collects events from the Kubernetes API server. The collected events are converted into logs.
The Kubernetes Events Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenShift Container Platform permissions required for the Kubernetes Events Receiver
OpenTelemetry Collector custom resource with the enabled Kubernetes Events Receiver
3.3. Processors
Processors process the data between the time it is received and the time it is exported. Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters.
Currently, the following General Availability and Technology Preview processors are available for the Red Hat build of OpenTelemetry:
3.3.1. Batch Processor
The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information.
Example of the OpenTelemetry Collector custom resource when using the Batch Processor
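A minimal sketch of the processors fragment under spec.config; the values are illustrative, and the parameters are described in the table that follows:

```yaml
processors:
  batch:
    timeout: 5s                       # flush the batch after this duration
    send_batch_size: 10000            # flush after this many spans or metrics
    send_batch_max_size: 11000        # hard upper bound on batch size
    metadata_keys: [tenant_id]        # illustrative client.Metadata key
    metadata_cardinality_limit: 1000  # cap on distinct metadata combinations
```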
Parameter | Description | Default
---|---|---
timeout | Sends the batch after a specific time duration and irrespective of the batch size. | 200ms
send_batch_size | Sends the batch of telemetry data after the specified number of spans or metrics. | 8192
send_batch_max_size | The maximum allowable size of the batch. Must be equal to or greater than the send_batch_size. | 0
metadata_keys | When activated, a batcher instance is created for each unique set of values found in the client.Metadata. | []
metadata_cardinality_limit | When the metadata_keys are populated, this configuration restricts the number of distinct metadata key-value combinations processed throughout the duration of the process. | 1000
3.3.2. Memory Limiter Processor
The Memory Limiter Processor periodically checks the Collector's memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. The preceding component, which is typically a receiver, is expected to retry sending the same data and might apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run.
Example of the OpenTelemetry Collector custom resource when using the Memory Limiter Processor
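A minimal sketch of the processors fragment under spec.config; the limits are illustrative and must be tuned to the memory available to your Collector:

```yaml
processors:
  memory_limiter:
    check_interval: 1s     # how often memory usage is measured
    limit_mib: 4000        # hard limit in MiB
    spike_limit_mib: 800   # expected maximum spike, roughly 20% of limit_mib
```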
Parameter | Description | Default
---|---|---
check_interval | Time between memory usage measurements. The optimal value is 1s. For spiky traffic patterns, you can decrease the check_interval or increase the spike_limit_mib. | 0s
limit_mib | The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value. | 0
spike_limit_mib | Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib. | 20% of limit_mib
limit_percentage | Same as the limit_mib but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting. | 0
spike_limit_percentage | Same as the spike_limit_mib but expressed as a percentage of the total available memory. The spike_limit_mib setting takes precedence over this setting. | 0
3.3.3. Resource Detection Processor
The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry's resource semantic conventions. Using the detected information, this processor can add or replace the resource values in telemetry data. This processor supports traces and metrics. You can use this processor with multiple detectors, such as the Docker metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector.
The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenShift Container Platform permissions required for the Resource Detection Processor
OpenTelemetry Collector using the Resource Detection Processor
OpenTelemetry Collector using the Resource Detection Processor with an environment variable detector
- 1: Specifies which detector to use. In this example, the environment detector is specified.
3.3.4. Attributes Processor
The Attributes Processor can modify attributes of a span, log, or metric. You can configure this processor to filter and match input data and include or exclude such data for specific actions.
This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported:
- Insert: Inserts a new attribute into the input data when the specified key does not already exist.
- Update: Updates an attribute in the input data if the key already exists.
- Upsert: Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists.
- Delete: Removes an attribute from the input data.
- Hash: Hashes an existing attribute value as SHA-1.
- Extract: Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span Processor's to_attributes setting with the existing attribute as the source.
- Convert: Converts an existing attribute to a specified type.
OpenTelemetry Collector using the Attributes Processor
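A minimal sketch of the processors fragment under spec.config showing one action of each common kind; the keys and values are illustrative:

```yaml
processors:
  attributes:
    actions:
      - key: account_id          # insert a new attribute if missing
        value: 2245
        action: insert
      - key: account_password    # remove a sensitive attribute
        action: delete
      - key: account_email       # hash the value in place
        action: hash
      - key: http.status_code    # convert the value to an integer
        action: convert
        converted_type: int
```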
3.3.5. Resource Processor
The Resource Processor applies changes to the resource attributes. This processor supports traces, metrics, and logs.
OpenTelemetry Collector using the Resource Processor
The attributes field lists the actions that are applied to the resource attributes, such as deleting, inserting, or upserting an attribute.
3.3.6. Span Processor
The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces.
Span renaming requires specifying attributes for the new name by using the from_attributes configuration.
The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector using the Span Processor for renaming a span
You can use this processor to extract attributes from the span name.
OpenTelemetry Collector using the Span Processor for extracting attributes from a span name
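A minimal sketch of the processors fragment under spec.config; the regular expression is the one discussed in the callout below:

```yaml
processors:
  span/extract:
    name:
      to_attributes:
        rules:
          - ^\/api\/v1\/document\/(?P<documentId>.*)\/update$   # 1
```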
- 1: This rule defines how the extraction is to be executed. You can define more rules: for example, in this case, if the regular expression matches the name, a documentId attribute is created. In this example, if the input span name is /api/v1/document/12345678/update, this results in the /api/v1/document/{documentId}/update output span name, and a new "documentId"="12345678" attribute is added to the span.
You can also use this processor to modify the span status.
OpenTelemetry Collector using the Span Processor for status change
3.3.7. Kubernetes Attributes Processor
The Kubernetes Attributes Processor enables automatic configuration of spans, metrics, and log resource attributes by using the Kubernetes metadata. This processor supports traces, metrics, and logs. This processor automatically identifies the Kubernetes resources, extracts the metadata from them, and incorporates this extracted metadata as resource attributes into relevant spans, metrics, and logs. It utilizes the Kubernetes API to discover all pods operating within a cluster, maintaining records of their IP addresses, pod UIDs, and other relevant metadata.
Minimum OpenShift Container Platform permissions required for the Kubernetes Attributes Processor
OpenTelemetry Collector using the Kubernetes Attributes Processor
3.3.8. Filter Processor
The Filter Processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator. This processor supports traces, metrics, and logs.
The Filter Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with an enabled Filter Processor
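A minimal sketch of the processors fragment under spec.config; the attribute values are illustrative, and the numbered comments correspond to the callouts below:

```yaml
processors:
  filter:
    error_mode: ignore   # 1
    traces:
      span:
        - 'attributes["container.name"] == "app_container_1"'   # 2
        - 'resource.attributes["host.name"] == "localhost"'     # 3
```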
- 1: Defines the error mode. When set to ignore, ignores errors returned by conditions. When set to propagate, returns the error up the pipeline. An error causes the payload to be dropped from the Collector.
- 2: Filters the spans that have the container.name == app_container_1 attribute.
- 3: Filters the spans that have the host.name == localhost resource attribute.
3.3.9. Routing Processor
The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request or read a resource attribute, and then direct the trace information to relevant exporters according to the read value.
The Routing Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with an enabled Routing Processor
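A minimal sketch of the processors fragment under spec.config; the header name, route value, and exporter names are illustrative placeholders:

```yaml
processors:
  routing:
    from_attribute: X-Tenant       # header or attribute holding the route value
    default_exporters: [debug]     # used when no table entry matches
    table:
      - value: acme
        exporters: [otlp/acme]
```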
Optionally, you can create an attribute_source configuration, which defines where to look for the attribute that you specify in the from_attribute field. The supported values are context for searching the context, including the HTTP headers, and resource for searching the resource attributes.
3.3.10. Cumulative-to-Delta Processor
The Cumulative-to-Delta Processor converts monotonic cumulative-sum and histogram metrics to monotonic delta metrics.
You can filter metrics by using the include: or exclude: fields and specifying the strict or regexp metric name matching.
Because this processor calculates delta by storing the previous value of a metric, you must set up the metric source to send the metric data to a single stateful Collector instance rather than a deployment of multiple Collectors.
This processor does not convert non-monotonic sums and exponential histograms.
The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor
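A minimal sketch of the relevant parts of the custom resource; the metric names are placeholders, and the numbered comments correspond to the callouts below:

```yaml
spec:
  mode: sidecar                       # 1
  config:
    processors:
      cumulativetodelta:
        include:                      # 2
          match_type: strict          # 3
          metrics:                    # 4
            - <metric_1_name>
            - <metric_2_name>
        exclude:                      # 5
          match_type: regexp
          metrics:
            - "<regular_expression_for_metric_names>"
```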
- 1: To tie the Collector's lifecycle to the metric source, you can run the Collector as a sidecar to the application that emits the cumulative temporality metrics.
- 2: Optional: You can limit which metrics the processor converts by explicitly defining which metrics you want converted in this stanza. If you omit this field, the processor converts all metrics, except the metrics that are listed in the exclude field.
- 3: Defines the value that you provided in the metrics field as an exact match by using the strict parameter or a regular expression by using the regexp parameter.
- 4: Lists the names of the metrics that you want to convert. The processor converts exact matches or matches for regular expressions. If a metric matches both the include and exclude filters, the exclude filter takes precedence.
- 5: Optional: You can exclude certain metrics from conversion by explicitly defining them here.
3.3.11. Group-by-Attributes Processor
The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes.
The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example:
Example of the OpenTelemetry Collector custom resource when using the Group-by-Attributes Processor
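A minimal sketch of the processors fragment under spec.config; the key names are placeholders, and the numbered comments correspond to the callouts below:

```yaml
processors:
  groupbyattrs:
    keys:        # 1
      - <key1>   # 2
      - <key2>
```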
- 1: Specifies attribute keys to group by.
- 2: If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated.
3.3.12. Transform Processor
The Transform Processor enables modification of telemetry data according to specified rules written in the OpenTelemetry Transformation Language (OTTL). For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL Context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate if a function is to be executed.
All statements are written in the OTTL. You can configure multiple context statements for different signals: traces, metrics, and logs. The value of the context type specifies which OTTL Context the processor must use when interpreting the associated statements.
The Transform Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Configuration summary
Example of the OpenTelemetry Collector custom resource when using the Transform Processor
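A minimal sketch of the processors fragment under spec.config; the attribute names and the condition are illustrative, and the statements follow the upstream Transform Processor OTTL syntax:

```yaml
processors:
  transform:
    error_mode: ignore
    trace_statements:
      - context: resource
        statements:
          # keep only the listed resource attributes
          - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region"])
      - context: span
        statements:
          # mark health-check spans as OK
          - set(status.code, 1) where attributes["http.path"] == "/health"
```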
Signal statement | Valid contexts
---|---
trace_statements | resource, scope, span, spanevent
metric_statements | resource, scope, metric, datapoint
log_statements | resource, scope, log

Value | Description
---|---
ignore | Ignores and logs errors returned by statements and then continues to the next statement.
silent | Ignores and doesn't log errors returned by statements and then continues to the next statement.
propagate | Returns errors up the pipeline and drops the payload. Implicit default.
3.3.13. Tail Sampling Processor
The Tail Sampling Processor samples traces according to user-defined policies when all of the spans are completed. Tail-based sampling enables you to filter the traces of interest and reduce your data ingestion and storage costs.
The Tail Sampling Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
This processor reassembles spans into new batches and strips spans of their original context.
- In pipelines, place this processor downstream of any processors that rely on context: for example, after the Kubernetes Attributes Processor.
- If scaling the Collector, ensure that one Collector instance receives all spans of the same trace so that this processor makes correct sampling decisions based on the specified sampling policies. You can achieve this by setting up two layers of Collectors: the first layer of Collectors with the Load Balancing Exporter, and the second layer of Collectors with the Tail Sampling Processor.
Example of the OpenTelemetry Collector custom resource when using the Tail Sampling Processor
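A minimal sketch of the processors fragment under spec.config; the policy name is a placeholder, and the numbered comments correspond to the callouts below:

```yaml
processors:
  tail_sampling:                      # 1
    decision_wait: 30s                # 2
    num_traces: 50000                 # 3
    expected_new_traces_per_sec: 10   # 4
    policies:                         # 5
      - name: <policy_name>
        type: always_sample
```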
- 1: Processor name.
- 2: Optional: Decision delay time, counted from the time of the first span, before the processor makes a sampling decision on each trace. Defaults to 30s.
- 3: Optional: The number of traces kept in memory. Defaults to 50000.
- 4: Optional: The expected number of new traces per second, which is helpful for allocating data structures. Defaults to 0.
- 5: Definitions of the policies for trace evaluation. The processor evaluates each trace against all of the specified policies and then either samples or drops the trace.
You can choose and combine policies from the following list. A consolidated configuration sketch covering several of these policy types follows the list.

- A policy that samples all traces.
- A policy that samples only traces of a duration that is within a specified range. Example threshold values are 5000 and 10000. You can estimate the desired latency values by looking at the earliest start time value and latest end time value. If you omit the upper_threshold_ms field, this policy samples all latencies greater than the specified threshold_ms value.
- A policy that samples traces by numeric value matches for resource and record attributes. Example boundary values are 50 and 100.
- A policy that samples only a percentage of traces, for example 10 percent.
- A policy that samples traces by the status code: OK, ERROR, or UNSET.
- A policy that samples traces by string value matches for resource and record attributes. This policy definition supports both exact and regular-expression value matches. An example value for the cache_max_size field is 10.
- A policy that samples traces by the rate of spans per second, for example 35 spans per second.
- A policy that samples traces by the minimum and maximum number of spans inclusively, for example 2 and 20. If the sum of all spans in the trace is outside the range threshold, the trace is not sampled.
- A policy that samples traces by TraceState value matches.
- A policy that samples traces by a boolean attribute (resource and record).
- A policy that samples traces by a given boolean OTTL condition for a span or span event.
- An AND policy that samples traces based on a combination of multiple policies. Example values are 50 and 100.
- A DROP policy that drops traces from sampling based on a combination of multiple policies.
- A composite policy that samples traces by a combination of the previous samplers, with ordering and rate allocation per sampler. The rate allocation assigns percentages of spans according to the order of applied policies. For example, if you set the 100 value in the max_total_spans_per_second field, you can set the following values in the rate_allocation section: the 50 percent value in the policy: <composite_policy_1> section to allocate 50 spans per second, and the 25 percent value in the policy: <composite_policy_2> section to allocate 25 spans per second. To fill the remaining capacity, you can set the always_sample value in the type field of the name: <composite_policy_3> section.
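A minimal consolidated sketch combining several of the policy types listed above; the policy names and numeric values are illustrative, and the stanzas follow the upstream Tail Sampling Processor schema:

```yaml
processors:
  tail_sampling:
    decision_wait: 30s
    policies:
      - name: all-policy
        type: always_sample
      - name: latency-policy
        type: latency
        latency:
          threshold_ms: 5000
          upper_threshold_ms: 10000
      - name: probabilistic-policy
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
      - name: status-code-policy
        type: status_code
        status_code:
          status_codes: [ERROR]
```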
3.4. Exporters
Exporters send data to one or more back ends or destinations. An exporter can be push or pull based. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings.
Currently, the following General Availability and Technology Preview exporters are available for the Red Hat build of OpenTelemetry:
3.4.1. OTLP Exporter
The OTLP gRPC Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP).
OpenTelemetry Collector custom resource with the enabled OTLP Exporter
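A minimal sketch of the exporters fragment under spec.config; the endpoint, certificate paths, and header are illustrative placeholders, and the numbered comments correspond to the callouts below:

```yaml
exporters:
  otlp:
    endpoint: otel-backend.example.com:4317   # 1
    tls:                                      # 2
      ca_file: ca.pem
      cert_file: cert.pem
      key_file: key.pem
      insecure: false                         # 3
      insecure_skip_verify: false             # 4
      reload_interval: 1h                     # 5
      server_name_override: <name>            # 6
    headers:                                  # 7
      X-Scope-OrgID: "dev"
```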
- 1: The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls.
- 2: The client-side TLS configuration. Defines paths to TLS certificates.
- 3: Disables client transport security when set to true. The default value is false.
- 4: Skips verifying the certificate when set to true. The default value is false.
- 5: Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval accepts a string containing valid units of time such as ns, us (or µs), ms, s, m, h.
- 6: Overrides the virtual host name of authority such as the authority header field in requests. You can use this for testing.
- 7: Headers are sent for every request performed during an established connection.
3.4.2. OTLP HTTP Exporter
The OTLP HTTP Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP).
OpenTelemetry Collector custom resource with the enabled OTLP HTTP Exporter
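A minimal sketch of the exporters fragment under spec.config; the endpoint and header values are placeholders, and the numbered comments correspond to the callouts below:

```yaml
exporters:
  otlphttp:
    endpoint: http://otel-backend.example.com:4318   # 1
    tls:                                             # 2
      ca_file: ca.pem
    headers:                                         # 3
      X-Scope-OrgID: "dev"
    disable_keep_alives: false                       # 4
```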
- 1: The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls.
- 2: The client-side TLS configuration. Defines paths to TLS certificates.
- 3: Headers are sent in every HTTP request.
- 4: If set to true, disables HTTP keep-alives, and the connection to the server is used for only a single HTTP request.
3.4.3. Debug Exporter
The Debug Exporter prints traces and metrics to the standard output.
OpenTelemetry Collector custom resource with the enabled Debug Exporter
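A minimal sketch of the exporters fragment under spec.config; the sampling values are illustrative, and the numbered comments correspond to the callouts below:

```yaml
exporters:
  debug:
    verbosity: detailed        # 1
    sampling_initial: 5        # 2
    sampling_thereafter: 200   # 3
    use_internal_logger: true  # 4
```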
- 1: Verbosity of the debug export: detailed, normal, or basic. When set to detailed, pipeline data are verbosely logged. Defaults to normal.
- 2: Initial number of messages logged per second. The default value is 2 messages per second.
- 3: Sampling rate after the initial number of messages, the value in sampling_initial, has been logged. Disabled by default with the default 1 value. Sampling is enabled with values greater than 1. For more information, see the page for the sampler function in the zapcore package on the Go Project's website.
- 4: When set to true, enables output from the Collector's internal logger for the exporter.
3.4.4. Load Balancing Exporter
The Load Balancing Exporter consistently exports spans, metrics, and logs according to the routing_key configuration.
The Load Balancing Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Load Balancing Exporter
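A minimal sketch of the exporters fragment under spec.config; the hostnames and service names are placeholders, and all three resolvers are shown only so that the numbered comments align with the callouts below; in a real configuration you configure exactly one resolver:

```yaml
exporters:
  loadbalancing:
    routing_key: "service"   # 1
    protocol:
      otlp:                  # 2
        timeout: 1s
    resolver:                # 3
      static:                # 4
        hostnames:
          - backend-1.example.com:4317
          - backend-2.example.com:4317
      dns:                   # 5
        hostname: otelcol-headless.observability.svc.cluster.local
      k8s:                   # 6
        service: lb-svc.observability
```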
- 1: The routing_key: service exports spans for the same service name to the same Collector instance to provide accurate aggregation. The routing_key: traceID exports spans based on their traceID. The implicit default is traceID-based routing.
- 2: The OTLP is the only supported load-balancing protocol. All options of the OTLP exporter are supported.
- 3: You can configure only one resolver.
- 4: The static resolver distributes the load across the listed endpoints.
- 5: You can use the DNS resolver only with a Kubernetes headless service.
- 6: The Kubernetes resolver is recommended.
3.4.5. Prometheus Exporter
The Prometheus Exporter exports metrics in the Prometheus or OpenMetrics formats.
The Prometheus Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Prometheus Exporter
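A minimal sketch of the exporters fragment under spec.config; the namespace, label, and certificate paths are placeholders, and the numbered comments correspond to the callouts below:

```yaml
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889              # 1
    tls:                                # 2
      ca_file: ca.pem
      cert_file: cert.pem
      key_file: key.pem
    namespace: prefix                   # 3
    const_labels:                       # 4
      label1: value1
    enable_open_metrics: true           # 5
    resource_to_telemetry_conversion:   # 6
      enabled: true
    metric_expiration: 180m             # 7
    add_metric_suffixes: false          # 8
```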
- 1: The network endpoint where the metrics are exposed. The Red Hat build of OpenTelemetry Operator automatically exposes the port specified in the endpoint field to the <instance_name>-collector service.
- 2: The server-side TLS configuration. Defines paths to TLS certificates.
- 3: If set, exports metrics under the provided value.
- 4: Key-value pair labels that are applied for every exported metric.
- 5: If true, metrics are exported by using the OpenMetrics format. Exemplars are only exported in the OpenMetrics format and only for histogram and monotonic sum metrics such as counter. Disabled by default.
- 6: If enabled is true, all the resource attributes are converted to metric labels. Disabled by default.
- 7: Defines how long metrics are exposed without updates. The default is 5m.
- 8: Adds the metrics types and units suffixes. Must be disabled if the monitor tab in the Jaeger console is enabled. The default is true.
When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true, the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics.
3.4.6. Prometheus Remote Write Exporter
The Prometheus Remote Write Exporter exports metrics to compatible back ends.
The Prometheus Remote Write Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Exporter
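A minimal sketch of the exporters fragment under spec.config; the endpoint and certificate paths are placeholders, and the numbered comments correspond to the callouts below:

```yaml
exporters:
  prometheusremotewrite:
    endpoint: "https://prometheus.example.com/api/v1/push"   # 1
    tls:                                                     # 2
      ca_file: ca.pem
      cert_file: cert.pem
      key_file: key.pem
    target_info: true                                        # 3
    export_created_metric: true                              # 4
    max_batch_size_bytes: 3000000                            # 5
```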
- 1: Endpoint for sending the metrics.
- 2: Server-side TLS configuration. Defines paths to TLS certificates.
- 3: When set to true, creates a target_info metric for each resource metric.
- 4: When set to true, exports a _created metric for the Summary, Histogram, and Monotonic Sum metric points.
- 5: Maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is 3000000, which is approximately 2.861 megabytes.
- This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics.
- You must enable the --web.enable-remote-write-receiver feature flag on the remote Prometheus instance. Without it, pushing the metrics to the instance by using this exporter fails.
3.4.7. Kafka Exporter
The Kafka Exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. You must use it with batch and queued retry processors for higher throughput and resiliency.
OpenTelemetry Collector custom resource with the enabled Kafka Exporter
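A minimal sketch of the exporters fragment under spec.config; the broker address, credentials, and certificate paths are placeholders, and the numbered comments correspond to the callouts below:

```yaml
exporters:
  kafka:
    brokers: ["broker1.example.com:9092"]   # 1
    protocol_version: 2.0.0                 # 2
    topic: otlp_spans                       # 3
    auth:
      plain_text:                           # 4
        username: <username>
        password: <password>
      tls:                                  # 5
        ca_file: ca.pem
        cert_file: cert.pem
        key_file: key.pem
        insecure: false                     # 6
        server_name_override: kafka.example.com   # 7
```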
- 1: The list of Kafka brokers. The default is localhost:9092.
- 2: The Kafka protocol version. For example, 2.0.0. This is a required field.
- 3: The name of the Kafka topic to export to. The following are the defaults: otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs.
- 4: The plain text authentication configuration. If omitted, plain text authentication is disabled.
- 5: The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
- 6: Disables verifying the server's certificate chain and host name. The default is false.
- 7: ServerName indicates the name of the server requested by the client to support virtual hosting.
3.4.8. AWS CloudWatch Logs Exporter
The AWS CloudWatch Logs Exporter sends logs data to the Amazon CloudWatch Logs service and signs requests by using the AWS SDK for Go and the default credential provider chain.
The AWS CloudWatch Logs Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled AWS CloudWatch Logs Exporter
- 1: Required. If the log group does not exist yet, it is automatically created.
- 2: Required. If the log stream does not exist yet, it is automatically created.
- 3: Optional. If the AWS region is not already set in the default credential chain, you must specify it.
- 4: Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference).
- 5: Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0, the logs never expire by default. Supported values for retention in days are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 2192, 2557, 2922, 3288, or 3653.
3.4.9. AWS EMF Exporter
The AWS EMF Exporter converts the following OpenTelemetry metrics datapoints to the AWS CloudWatch Embedded Metric Format (EMF):
- Int64DataPoints
- DoubleDataPoints
- SummaryDataPoints

The EMF metrics are then sent directly to the Amazon CloudWatch Logs service by using the PutLogEvents API.
One of the benefits of using this exporter is that you can view logs and metrics in the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
The AWS EMF Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled AWS EMF Exporter
- 1: Customized log group name.
- 2: Customized log stream name.
- 3: Optional. Converts resource attributes to telemetry attributes such as metric labels. Disabled by default.
- 4: The AWS region of the log stream. If a region is not already set in the default credential provider chain, you must specify the region.
- 5: Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference).
- 6: Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0, the logs never expire by default. Supported values for retention in days are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 2192, 2557, 2922, 3288, or 3653.
- 7: Optional. A custom namespace for the Amazon CloudWatch metrics.
Log group name
The log_group_name parameter allows you to customize the log group name and supports the default /metrics/default value or the following placeholders:
- /aws/metrics/{ClusterName}: This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute in the metrics data and replace it with the actual cluster name.
- {NodeName}: This placeholder is used to search for the NodeName or k8s.node.name resource attribute.
- {TaskId}: This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute.
If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value.
Log stream name
The log_stream_name parameter allows you to customize the log stream name and supports the default otel-stream value or the following placeholders:
- {ClusterName}: This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute.
- {ContainerInstanceId}: This placeholder is used to search for the ContainerInstanceId or aws.ecs.container.instance.id resource attribute. This resource attribute is valid only for the AWS ECS EC2 launch type.
- {NodeName}: This placeholder is used to search for the NodeName or k8s.node.name resource attribute.
- {TaskDefinitionFamily}: This placeholder is used to search for the TaskDefinitionFamily or aws.ecs.task.family resource attribute.
- {TaskId}: This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute in the metrics data and replace it with the actual task ID.
If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value.
3.4.10. AWS X-Ray Exporter
The AWS X-Ray Exporter converts OpenTelemetry spans to AWS X-Ray Segment Documents and then sends them directly to the AWS X-Ray service. The AWS X-Ray Exporter uses the PutTraceSegments API and signs requests by using the AWS SDK for Go and the default credential provider chain.
The AWS X-Ray Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled AWS X-Ray Exporter
- 1: The destination region for the X-Ray segments sent to the AWS X-Ray service. For example, eu-west-1.
- 2: Optional. You can override the default AWS X-Ray service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see the AWS General Reference.
- 3: The Amazon Resource Name (ARN) of the AWS resource that is running the Collector.
- 4: The AWS Identity and Access Management (IAM) role for uploading the X-Ray segments to a different account.
- 5: The list of attribute names to be converted to X-Ray annotations.
- 6: The list of log group names for Amazon CloudWatch Logs.
- 7: Time duration in seconds before timing out a request. If omitted, the default value is 30.
3.4.11. File Exporter
The File Exporter writes telemetry data to files in persistent storage and supports file operations such as rotation, compression, and writing to multiple files. With this exporter, you can also use a resource attribute to control file naming. The only required setting is path, which specifies the destination path for telemetry files in the persistent-volume file system.
The File Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled File Exporter
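A minimal sketch of the exporters fragment under spec.config; the path and rotation values are illustrative, and the numbered comments correspond to the callouts below:

```yaml
exporters:
  file:
    path: /data/metrics.json   # 1
    rotation:                  # 2
      max_megabytes: 10        # 3
      max_days: 3              # 4
      max_backups: 3           # 5
      localtime: true          # 6
    format: proto              # 7
    compression: zstd          # 8
    flush_interval: 5s         # 9
```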
- 1: The file-system path where the data is to be written. There is no default.
- 2: File rotation is an optional feature of this exporter. By default, telemetry data is exported to a single file. Add the rotation setting to enable file rotation.
- 3: The max_megabytes setting is the maximum size a file is allowed to reach until it is rotated. The default is 100.
- 4: The max_days setting is for how many days a file is to be retained, counting from the timestamp in the file name. There is no default.
- 5: The max_backups setting is for retaining several older files. The default is 100.
- 6: The localtime setting specifies the local-time format for the timestamp, which is appended to the file name in front of any extension, when the file is rotated. The default is the Coordinated Universal Time (UTC).
- 7: The format for encoding the telemetry data before writing it to a file. The default format is json. The proto format is also supported.
- 8: File compression is optional and not set by default. This setting defines the compression algorithm for the data that is exported to a file. Currently, only the zstd compression algorithm is supported. There is no default.
- 9: The time interval between flushes. A value without a unit is set in nanoseconds. This setting is ignored when file rotation is enabled through the rotation settings.
3.5. Connectors
A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.
Currently, the following General Availability and Technology Preview connectors are available for the Red Hat build of OpenTelemetry:
3.5.1. Count Connector
The Count Connector counts trace spans, trace span events, metrics, metric data points, and log records in exporter pipelines.
The Count Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following are the default metric names:
- trace.span.count
- trace.span.event.count
- metric.count
- metric.datapoint.count
- log.record.count
You can also expose custom metric names.
OpenTelemetry Collector custom resource (CR) with an enabled Count Connector
- 1: It is important to correctly configure the Count Connector as an exporter or receiver in the pipeline and to export the generated metrics to the correct exporter.
- 2: The Count Connector is configured to receive spans as an exporter.
- 3: The Count Connector is configured to emit generated metrics as a receiver.

Tip: If the Count Connector is not generating the expected metrics, you can check whether the OpenTelemetry Collector is receiving the expected spans, metrics, and logs, and whether the telemetry data flow through the Count Connector as expected. You can also use the Debug Exporter to inspect the incoming telemetry data.
The Count Connector can count telemetry data according to defined conditions and expose those data as metrics when configured by using such fields as spans, spanevents, metrics, datapoints, or logs. See the next example.
Example OpenTelemetry Collector CR for the Count Connector to count spans by conditions
- 1: In this example, the exposed metric counts spans with the specified conditions.
- 2: You can specify a custom metric name such as cluster.prod.event.count.

Tip: Write conditions correctly and follow the required syntax for attribute matching or telemetry field conditions. Improperly defined conditions are the most likely sources of errors.
The Count Connector can count telemetry data according to defined attributes when configured by using such fields as spans, spanevents, metrics, datapoints, or logs. See the next example. The attribute keys are injected into the telemetry data. You must define a value for the default_value field for missing attributes.
Example OpenTelemetry Collector CR for the Count Connector to count logs by attributes
3.5.2. Routing Connector
The Routing Connector routes logs, metrics, and traces to specified pipelines according to resource attributes and their routing conditions, which are written as OpenTelemetry Transformation Language (OTTL) statements.
The Routing Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with an enabled Routing Connector
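A minimal sketch of the connectors fragment and the pipelines that it joins, placed under spec.config; the tenant values and pipeline names are placeholders, and the numbered comments correspond to the callouts below:

```yaml
connectors:
  routing:
    table:                                                         # 1
      - statement: route() where attributes["X-Tenant"] == "dev"   # 2
        pipelines: [traces/dev]                                    # 3
    default_pipelines: [traces/prod]                               # 4
    error_mode: ignore                                             # 5
    match_once: false                                              # 6
service:
  pipelines:
    traces/in:
      receivers: [otlp]
      exporters: [routing]
    traces/dev:
      receivers: [routing]
      exporters: [otlp/dev]
    traces/prod:
      receivers: [routing]
      exporters: [otlp/prod]
```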
- 1: Connector routing table.
- 2: Routing conditions written as OTTL statements.
- 3: Destination pipelines for routing the matching telemetry data.
- 4: Destination pipelines for routing the telemetry data for which no routing condition is satisfied.
- 5: Error-handling mode: The propagate value is for logging an error and dropping the payload. The ignore value is for ignoring the condition and attempting to match with the next one. The silent value is the same as ignore but without logging the error. The default is propagate.
- 6: When set to true, the payload is routed only to the first pipeline whose routing condition is met. The default is false.
3.5.3. Forward Connector
The Forward Connector merges two pipelines of the same type.
The Forward Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with an enabled Forward Connector
3.5.4. Spanmetrics Connector
The Spanmetrics Connector aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data.
OpenTelemetry Collector custom resource with an enabled Spanmetrics Connector
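A minimal sketch of the connectors fragment under spec.config, with the connector bridging a traces pipeline into a metrics pipeline; the receiver and exporter names are placeholders, and the numbered comment corresponds to the callout below:

```yaml
connectors:
  spanmetrics:
    metrics_flush_interval: 15s   # 1
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics]
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheus]
```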
- 1: Defines the flush interval of the generated metrics. Defaults to 15s.
3.6. Extensions
Extensions add capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically.
Currently, the following General Availability and Technology Preview extensions are available for the Red Hat build of OpenTelemetry:
3.6.1. BearerTokenAuth Extension
The BearerTokenAuth Extension is an authenticator for receivers and exporters that are based on the HTTP and the gRPC protocol. You can use the OpenTelemetry Collector custom resource to configure client authentication and server authentication for the BearerTokenAuth Extension on the receiver and exporter side. This extension supports traces, metrics, and logs.
OpenTelemetry Collector custom resource with client and server authentication configured for the BearerTokenAuth Extension
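A minimal sketch of the relevant fragments under spec.config; the token and file path are placeholders, and the numbered comments correspond to the callouts below:

```yaml
extensions:
  bearertokenauth:
    scheme: "Bearer"          # 1
    token: "<token>"          # 2
    filename: "<token_file>"  # 3
receivers:
  otlp:
    protocols:
      http:
        auth:
          authenticator: bearertokenauth   # 4
exporters:
  otlp:
    auth:
      authenticator: bearertokenauth       # 5
service:
  extensions: [bearertokenauth]
```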
- 1: You can configure the BearerTokenAuth Extension to send a custom scheme. The default is Bearer.
- 2: You can add the BearerTokenAuth Extension token as metadata to identify a message.
- 3: Path to a file that contains an authorization token that is transmitted with every message.
- 4: You can assign the authenticator configuration to an OTLP Receiver.
- 5: You can assign the authenticator configuration to an OTLP Exporter.
3.6.2. OAuth2Client Extension
The OAuth2Client Extension is an authenticator for exporters that are based on the HTTP and the gRPC protocol. Client authentication for the OAuth2Client Extension is configured in a separate section in the OpenTelemetry Collector custom resource. This extension supports traces, metrics, and logs.
The OAuth2Client Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with client authentication configured for the OAuth2Client Extension
- 1: Client identifier, which is provided by the identity provider.
- 2: Confidential key used to authenticate the client to the identity provider.
- 3: Further metadata, in the key-value pair format, which is transferred during authentication. For example, audience specifies the intended audience for the access token, indicating the recipient of the token.
- 4: The URL of the OAuth2 token endpoint, where the Collector requests access tokens.
- 5: The scopes define the specific permissions or access levels requested by the client.
- 6: The Transport Layer Security (TLS) settings for the token client, which is used to establish a secure connection when requesting tokens.
- 7: When set to true, configures the Collector to use an insecure or non-verified TLS connection to call the configured token endpoint.
- 8: The path to a Certificate Authority (CA) file that is used to verify the server's certificate during the TLS handshake.
- 9: The path to the client certificate file that the client must use to authenticate itself to the OAuth2 server if required.
- 10: The path to the client's private key file that is used with the client certificate if needed for authentication.
- 11: Sets a timeout for the token client's request.
- 12: You can assign the authenticator configuration to an OTLP exporter.
3.6.3. File Storage Extension
The File Storage Extension supports traces, metrics, and logs. This extension can persist the state to the local file system. This extension persists the sending queue for the OpenTelemetry Protocol (OTLP) exporters that are based on the HTTP and the gRPC protocols. This extension requires the read and write access to a directory. This extension can use a default directory, but the default directory must already exist.
The File Storage Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with a configured File Storage Extension that persists an OTLP sending queue
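A minimal sketch of the relevant fragments under spec.config; the directory paths are placeholders, and the numbered comments correspond to the callouts below:

```yaml
extensions:
  file_storage/all_settings:
    directory: /var/lib/otelcol/mydir   # 1
    timeout: 1s                         # 2
    compaction:
      on_start: true                    # 3
      directory: /tmp/                  # 4
      max_transaction_size: 65536       # 5
    fsync: false                        # 6
exporters:
  otlp:
    sending_queue:
      storage: file_storage/all_settings   # 7
service:
  extensions: [file_storage/all_settings]  # 8
```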
- 1: Specifies the directory in which the telemetry data is stored.
- 2: Specifies the timeout time interval for opening the stored files.
- 3: Starts compaction when the Collector starts. If omitted, the default is false.
- 4: Specifies the directory in which the compactor stores the telemetry data.
- 5: Defines the maximum size of the compaction transaction. To ignore the transaction size, set to zero. If omitted, the default is 65536 bytes.
- 6: When set, forces the database to perform an fsync call after each write operation. This helps to ensure database integrity if there is an interruption to the database process, but at the cost of performance.
- 7: Buffers the OTLP Exporter data on the local file system.
- 8: Starts the File Storage Extension by the Collector.
3.6.4. OIDC Auth Extension
The OIDC Auth Extension authenticates incoming requests to receivers by using the OpenID Connect (OIDC) protocol. It validates the ID token in the authorization header against the issuer and updates the authentication context of the incoming request.
The OIDC Auth Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the configured OIDC Auth Extension
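A minimal sketch of such a custom resource; the issuer URL, audience, and CA path are placeholder assumptions:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    extensions:
      oidc:
        attribute: authorization
        issuer_url: https://example.com/auth/realms/opentelemetry
        issuer_ca_path: /var/run/tls/issuer.pem
        audience: otel-collector
    receivers:
      otlp:
        protocols:
          grpc:
            auth:
              authenticator: oidc
    exporters:
      debug: {}
    service:
      extensions: [oidc]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]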
3.6.5. Jaeger Remote Sampling Extension
The Jaeger Remote Sampling Extension enables serving sampling strategies that follow the Jaeger remote sampling API. You can configure this extension to proxy requests to a backing remote sampling server, such as a Jaeger collector down the pipeline, or to serve strategies from a static JSON file on the local file system.
The Jaeger Remote Sampling Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with a configured Jaeger Remote Sampling Extension
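A minimal sketch of such a custom resource; the endpoint and file path are placeholder assumptions, and in practice you would typically configure either the remote server or the local file source:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    extensions:
      jaegerremotesampling:
        source:
          reload_interval: 30s
          remote:
            endpoint: jaeger-collector:14250
          file: /etc/otelcol/sampling_strategies.json
    service:
      extensions: [jaegerremotesampling]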
Example of a Jaeger Remote Sampling strategy file
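A minimal sketch of such a strategy file, with hypothetical service names foo and bar:

{
  "service_strategies": [
    {
      "service": "foo",
      "type": "probabilistic",
      "param": 0.8,
      "operation_strategies": [
        {
          "operation": "op1",
          "type": "probabilistic",
          "param": 0.2
        }
      ]
    },
    {
      "service": "bar",
      "type": "ratelimiting",
      "param": 5
    }
  ],
  "default_strategy": {
    "type": "probabilistic",
    "param": 0.5
  }
}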
3.6.6. Performance Profiler Extension
The Performance Profiler Extension enables the Go net/http/pprof endpoint. Developers use this extension to collect performance profiles and investigate issues with the service.
The Performance Profiler Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the configured Performance Profiler Extension
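A minimal sketch of such a custom resource; the file name my_pprof_file is a placeholder assumption, and the numbered comments correspond to the callouts that follow:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    extensions:
      pprof:
        endpoint: localhost:1777 # 1
        block_profile_fraction: 0 # 2
        mutex_profile_fraction: 0 # 3
        save_to_file: my_pprof_file # 4
    service:
      extensions: [pprof]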
- 1
- The endpoint at which this extension listens. Use localhost: to make it available only locally, or ":" to make it available on all network interfaces. The default value is localhost:1777.
- 2
- Sets a fraction of blocking events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0.
- 3
- Sets a fraction of mutex contention events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0.
- 4
- The name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts. Profiling is saved to the file when the Collector is terminated.
3.6.7. Health Check Extension
The Health Check Extension provides an HTTP URL for checking the status of the OpenTelemetry Collector. You can use this extension as a liveness and readiness probe on OpenShift.
The Health Check Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the configured Health Check Extension
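A minimal sketch of such a custom resource; the certificate paths are placeholder assumptions, and the numbered comments correspond to the callouts that follow:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133 # 1
        tls: # 2
          ca_file: /path/to/ca.crt
          cert_file: /path/to/tls.crt
          key_file: /path/to/tls.key
        path: /health/status # 3
        check_collector_pipeline: # 4
          enabled: true # 5
          interval: 5m # 6
          exporter_failure_threshold: 5 # 7
    service:
      extensions: [health_check]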
- 1
- The target IP address for publishing the health check status. The default is 0.0.0.0:13133.
- 2
- The TLS server-side configuration. Defines paths to TLS certificates. If omitted, TLS is disabled.
- 3
- The path for the health check server. The default is /.
- 4
- Settings for the Collector pipeline health check.
- 5
- Enables the Collector pipeline health check. The default is false.
- 6
- The time interval for checking the number of failures. The default is 5m.
- 7
- The number of failures up to which a container is still marked as healthy. The default is 5.
3.6.8. zPages Extension
The zPages Extension provides an HTTP endpoint that serves live data for debugging instrumented components in real time. You can use this extension for in-process diagnostics and insights into traces and metrics without relying on an external backend. With this extension, you can monitor and troubleshoot the behavior of the OpenTelemetry Collector and related components by watching the diagnostic information at the provided endpoint.
The zPages Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the configured zPages Extension
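A minimal sketch of such a custom resource, assuming the default endpoint:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    extensions:
      zpages:
        endpoint: localhost:55679 # 1
    service:
      extensions: [zpages]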
- 1
- Specifies the HTTP endpoint for serving the zPages extension. The default is localhost:55679.
Accessing the HTTP endpoint requires port-forwarding because the Red Hat build of OpenTelemetry Operator does not expose this route.
You can enable port-forwarding by running the following oc command:

$ oc port-forward pod/$(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679
The Collector provides the following zPages for diagnostics:
- ServiceZ
- Shows an overview of the Collector services and links to the following zPages: PipelineZ, ExtensionZ, and FeatureZ. This page also displays information about the build version and runtime. An example of this page's URL is http://localhost:55679/debug/servicez.
- PipelineZ
- Shows detailed information about the active pipelines in the Collector. This page displays the pipeline type, whether data are modified, and the associated receivers, processors, and exporters for each pipeline. An example of this page's URL is http://localhost:55679/debug/pipelinez.
- ExtensionZ
- Shows the currently active extensions in the Collector. An example of this page's URL is http://localhost:55679/debug/extensionz.
- FeatureZ
- Shows the feature gates enabled in the Collector along with their status and description. An example of this page's URL is http://localhost:55679/debug/featurez.
- TraceZ
- Shows spans categorized by latency. Available time ranges include 0 µs, 10 µs, 100 µs, 1 ms, 10 ms, 100 ms, 1 s, 10 s, 1 m. This page also allows for quick inspection of error samples. An example of this page's URL is http://localhost:55679/debug/tracez.
3.7. Target Allocator
The Target Allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances. The Target Allocator integrates with the Prometheus PodMonitor and ServiceMonitor custom resources (CR). When the Target Allocator is enabled, the OpenTelemetry Operator adds the http_sd_config field to the enabled prometheus receiver that connects to the Target Allocator service.
The Target Allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Example OpenTelemetryCollector CR with the enabled Target Allocator
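A minimal sketch of such a CR; the names and label selectors are placeholder assumptions, and the numbered comments correspond to the callouts that follow:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: statefulset # 1
  targetAllocator:
    enabled: true # 2
    serviceAccount: otel-targetallocator # 3
    prometheusCR:
      enabled: true # 4
      serviceMonitorSelector: # 5
        matchLabels:
          app: my-app
      podMonitorSelector: # 6
        matchLabels:
          app: my-app
  config:
    receivers:
      prometheus: # 7
        config:
          scrape_configs: []
    exporters:
      debug: {}
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]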
- 1
- When the Target Allocator is enabled, the deployment mode must be set to statefulset.
- 2
- Enables the Target Allocator. Defaults to false.
- 3
- The service account name of the Target Allocator deployment. The service account needs RBAC permissions to get the ServiceMonitor and PodMonitor custom resources and other objects from the cluster to properly set labels on scraped metrics. The default service account name is <collector_name>-targetallocator.
- 4
- Enables integration with the Prometheus PodMonitor and ServiceMonitor custom resources.
- 5
- Label selector for the Prometheus ServiceMonitor custom resources. When left empty, enables all service monitors.
- 6
- Label selector for the Prometheus PodMonitor custom resources. When left empty, enables all pod monitors.
- 7
- Prometheus receiver with the minimal, empty scrape_configs: [] configuration option.
The Target Allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration.
RBAC configuration for the Target Allocator service account
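A minimal sketch of such an RBAC configuration; the otel-targetallocator name and observability namespace are placeholder assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-targetallocator
rules:
- apiGroups: [""]
  resources: [services, pods, namespaces]
  verbs: [get, list, watch]
- apiGroups: [monitoring.coreos.com]
  resources: [servicemonitors, podmonitors]
  verbs: [get, list, watch]
- apiGroups: [discovery.k8s.io]
  resources: [endpointslices]
  verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-targetallocator
subjects:
- kind: ServiceAccount
  name: otel-targetallocator
  namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-targetallocator
  apiGroup: rbac.authorization.k8s.io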
Chapter 4. Configuring the instrumentation
The Red Hat build of OpenTelemetry Operator uses an Instrumentation custom resource that defines the configuration of the instrumentation.
4.1. Auto-instrumentation in the Red Hat build of OpenTelemetry Operator
Auto-instrumentation in the Red Hat build of OpenTelemetry Operator can automatically instrument an application without manual code changes. Developers and administrators can monitor applications with minimal effort and changes to the existing codebase.
Auto-instrumentation runs as follows:
- The Red Hat build of OpenTelemetry Operator injects an init-container, or a sidecar container for Go, to add the instrumentation libraries for the programming language of the instrumented application.
- The Red Hat build of OpenTelemetry Operator sets the required environment variables in the application’s runtime environment. These variables configure the auto-instrumentation libraries to collect traces, metrics, and logs and send them to the appropriate OpenTelemetry Collector or another telemetry backend.
- The injected libraries automatically instrument your application by connecting to known frameworks and libraries, such as web servers or database clients, to collect telemetry data. The source code of the instrumented application is not modified.
- Once the application is running with the injected instrumentation, the application automatically generates telemetry data, which is sent to a designated OpenTelemetry Collector or an external OTLP endpoint for further processing.
Auto-instrumentation enables you to start collecting telemetry data quickly without having to manually integrate the OpenTelemetry SDK into your application code. However, some applications might require specific configurations or custom manual instrumentation.
4.2. OpenTelemetry instrumentation configuration options
The Red Hat build of OpenTelemetry injects and configures the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the Red Hat build of OpenTelemetry supports injecting instrumentation libraries for Go, Java, Node.js, Python, .NET, and the Apache HTTP Server (httpd).
The Red Hat build of OpenTelemetry Operator supports only the injection mechanism for the instrumentation libraries; it does not support the instrumentation libraries themselves or upstream images. Customers can build their own instrumentation images or use community images.
4.2.1. Instrumentation options
Instrumentation options are specified in an Instrumentation custom resource (CR).

Sample Instrumentation CR
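A minimal sketch of such a CR; the name, endpoints, and sampler settings are placeholder assumptions, and the numbered comments correspond to the callouts that follow:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: instrumentation
spec:
  exporter:
    endpoint: http://production-collector.observability.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
  python:
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT # 1
        value: http://localhost:4318 # 2
  dotnet:
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT # 3
        value: http://localhost:4318 # 4
  go:
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT # 5
        value: http://localhost:4318 # 6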
- 1
- Python auto-instrumentation uses protocol buffers over HTTP (HTTP/proto or HTTP/protobuf) by default.
- 2
- Required if the endpoint is set to :4317.
- 3
- .NET auto-instrumentation uses protocol buffers over HTTP (HTTP/proto or HTTP/protobuf) by default.
- 4
- Required if the endpoint is set to :4317.
- 5
- Go auto-instrumentation uses protocol buffers over HTTP (HTTP/proto or HTTP/protobuf) by default.
- 6
- Required if the endpoint is set to :4317.

For more information about protocol buffers, see Overview (Protocol Buffers Documentation).
Parameter | Description | Values
---|---|---
env | Definition of common environment variables for all instrumentation types. |
exporter | Exporter configuration. |
propagators | Propagators defines inter-process context propagation configuration. | tracecontext, baggage, b3, b3multi, jaeger, xray, ottrace, none
resource | Resource attributes configuration. |
sampler | Sampling configuration. |
apacheHttpd | Configuration for the Apache HTTP Server instrumentation. |
dotnet | Configuration for the .NET instrumentation. |
go | Configuration for the Go instrumentation. |
java | Configuration for the Java instrumentation. |
nodejs | Configuration for the Node.js instrumentation. |
python | Configuration for the Python instrumentation. |

Depending on the programming language, environment variables might not work for configuring telemetry. For the SDKs that do not support environment variable configuration, you must add a similar configuration directly in the code. For more information, see Environment Variable Specification (OpenTelemetry Documentation).

Auto-instrumentation | Default protocol
---|---
Java 1.x | otlp/grpc
Java 2.x | otlp/http
Python | otlp/http
.NET | otlp/http
Go | otlp/http
Apache HTTP Server | otlp/grpc
4.2.2. Configuration of the OpenTelemetry SDK variables
You can use the instrumentation.opentelemetry.io/inject-sdk annotation in the OpenTelemetry Collector custom resource to instruct the Red Hat build of OpenTelemetry Operator to inject some of the following OpenTelemetry SDK environment variables, depending on the Instrumentation CR, into your pod:
- OTEL_SERVICE_NAME
- OTEL_TRACES_SAMPLER
- OTEL_TRACES_SAMPLER_ARG
- OTEL_PROPAGATORS
- OTEL_RESOURCE_ATTRIBUTES
- OTEL_EXPORTER_OTLP_ENDPOINT
- OTEL_EXPORTER_OTLP_CERTIFICATE
- OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE
- OTEL_EXPORTER_OTLP_CLIENT_KEY
Value | Description
---|---
"true" | Injects the Instrumentation resource with the default name from the current namespace.
"false" | Injects no Instrumentation resource.
"<instrumentation_name>" | Specifies the name of the Instrumentation resource to inject from the current namespace.
"<namespace>/<instrumentation_name>" | Specifies the name of the Instrumentation resource to inject from another namespace.
4.2.3. Exporter configuration
Although the Instrumentation custom resource supports setting up one or more exporters per signal, auto-instrumentation configures only the OTLP Exporter. Therefore, you must configure the endpoint to point to the OTLP Receiver on the Collector.
Sample exporter TLS CA configuration using a config map
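A minimal sketch of such a configuration; the endpoint and the otel-ca-bundle config map name are placeholder assumptions, and the numbered comments correspond to the callouts that follow:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: https://production-collector.observability.svc.cluster.local:4317 # 1
    tls:
      configMapName: otel-ca-bundle # 2
      ca_file: ca.crt # 3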
- 1
- Specifies the OTLP endpoint using the HTTPS scheme and TLS.
- 2
- Specifies the name of the config map. The config map must already exist in the namespace of the pod injecting the auto-instrumentation.
- 3
- Points to the CA certificate in the config map or the absolute path to the certificate if the certificate is already present in the workload file system.
Sample exporter mTLS configuration using a Secret
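A minimal sketch of such a configuration; the endpoint and the otel-mtls-certs Secret name are placeholder assumptions, and the numbered comments correspond to the callouts that follow:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: https://production-collector.observability.svc.cluster.local:4317 # 1
    tls:
      secretName: otel-mtls-certs # 2
      ca_file: ca.crt # 3
      cert_file: tls.crt # 4
      key_file: tls.key # 5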
- 1
- Specifies the OTLP endpoint using the HTTPS scheme and TLS.
- 2
- Specifies the name of the Secret for the ca_file, cert_file, and key_file values. The Secret must already exist in the namespace of the pod injecting the auto-instrumentation.
- 3
- Points to the CA certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system.
- 4
- Points to the client certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system.
- 5
- Points to the client key in the Secret or the absolute path to a key if the key is already present in the workload file system.
You can provide the CA certificate in a config map or Secret. If you provide it in both, the config map takes precedence over the Secret.
Example configuration for CA bundle injection by using a config map and Instrumentation CR
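A minimal sketch of such a configuration; the namespaces, the otelcol-cabundle config map name, and the collector endpoint are placeholder assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: otelcol-cabundle
  namespace: tutorial-application
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
---
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317
    tls:
      configMapName: otelcol-cabundle
      ca_file: service-ca.crt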
4.2.4. Configuration of the Apache HTTP Server auto-instrumentation
The Apache HTTP Server auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Name | Description | Default
---|---|---
attrs | Attributes specific to the Apache HTTP Server. |
configPath | Location of the Apache HTTP Server configuration. | /usr/local/apache2/conf
env | Environment variables specific to the Apache HTTP Server. |
image | Container image with the Apache SDK and auto-instrumentation. |
resourceRequirements | The compute resource requirements. |
version | Apache HTTP Server version. | 2.4
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-apache-httpd: "true"
4.2.5. Configuration of the .NET auto-instrumentation
The .NET auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, this feature injects unsupported, upstream instrumentation libraries.
Name | Description
---|---
env | Environment variables specific to .NET.
image | Container image with the .NET SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317. The .NET auto-instrumentation uses http/proto by default, so the telemetry data must be sent to port 4318.
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-dotnet: "true"
4.2.6. Configuration of the Go auto-instrumentation
The Go auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, this feature injects unsupported, upstream instrumentation libraries.
Name | Description
---|---
env | Environment variables specific to Go.
image | Container image with the Go SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
The PodSpec annotations to enable injection:

instrumentation.opentelemetry.io/inject-go: "true"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/<path>/<to>/<container>/<executable>"
- 1
- Sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable.
Permissions required for the Go auto-instrumentation in the OpenShift cluster
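A sketch of a SecurityContextConstraints object that grants the elevated permissions the Go auto-instrumentation needs, assuming the otel-go-instrumentation-scc name referenced by the command below:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: otel-go-instrumentation-scc
allowHostDirVolumePlugin: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- "SYS_PTRACE"
fsGroup:
  type: RunAsAny
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
  type: RunAsAny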
The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows:
$ oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>
4.2.7. Configuration of the Java auto-instrumentation
The Java auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, this feature injects unsupported, upstream instrumentation libraries.
Name | Description
---|---
env | Environment variables specific to Java.
image | Container image with the Java SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-java: "true"
4.2.8. Configuration of the Node.js auto-instrumentation
The Node.js auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, this feature injects unsupported, upstream instrumentation libraries.
Name | Description
---|---
env | Environment variables specific to Node.js.
image | Container image with the Node.js SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-nodejs: "true"
4.2.9. Configuration of the Python auto-instrumentation
The Python auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, this feature injects unsupported, upstream instrumentation libraries.
Name | Description
---|---
env | Environment variables specific to Python.
image | Container image with the Python SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
For the Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317. Python auto-instrumentation uses http/proto by default, so the telemetry data must be sent to port 4318.
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-python: "true"
4.2.10. Multi-container pods
By default, the instrumentation is injected into the first available container in the pod specification. You can also specify the target container names for injection.
Pod annotation
instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>"
- 1
- Use this annotation when you want to inject a single instrumentation in multiple containers.
The Go auto-instrumentation does not support multi-container auto-instrumentation injection.
4.2.11. Multi-container pods with multiple instrumentations
Injecting instrumentation for an application language to one or more containers in a multi-container pod requires the following annotation:
instrumentation.opentelemetry.io/<application_language>-container-names: "<container_1>,<container_2>"
- 1
- You can inject instrumentation for only one language per container. For the list of supported <application_language> values, see the following table.
Language | Value for <application_language>
---|---
ApacheHTTPD | apache-httpd
DotNet | dotnet
Java | java
NGINX | nginx
NodeJS | nodejs
Python | python
SDK | sdk
4.2.12. Using the instrumentation CR with Service Mesh
When using the Instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi propagator.
Chapter 5. Sending traces, logs, and metrics to the OpenTelemetry Collector
You can set up and use the Red Hat build of OpenTelemetry to send traces, logs, and metrics to the OpenTelemetry Collector or to a TempoStack instance.
Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection.
5.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection
You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection.
The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector.
Prerequisites
- The Red Hat OpenShift Distributed Tracing Platform is installed, and a TempoStack instance is deployed.
You have access to the cluster through the web console or the OpenShift CLI (oc):

- You are logged in to the web console as a cluster administrator with the cluster-admin role.
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
Procedure
Create a project for an OpenTelemetry Collector instance.

apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability

Create a service account.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-sidecar
  namespace: observability
Grant the permissions to the service account for the k8sattributes and resourcedetection processors.

Deploy the OpenTelemetry Collector as a sidecar.
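A minimal sketch of such a sidecar Collector; the otel name and the TempoStack gateway endpoint are placeholder assumptions, and the numbered comment corresponds to the callout that follows:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: sidecar
  serviceAccount: otel-collector-sidecar
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
        timeout: 2s
    exporters:
      otlp:
        endpoint: "tempo-<example>-gateway:8090" # 1
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, resourcedetection, batch]
          exporters: [otlp]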
- 1
- This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator.
- Create your deployment using the otel-collector-sidecar service account.
- Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This annotation injects all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance.
5.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection
You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables.
Prerequisites
- The Red Hat OpenShift Distributed Tracing Platform is installed, and a TempoStack instance is deployed.
You have access to the cluster through the web console or the OpenShift CLI (oc):

- You are logged in to the web console as a cluster administrator with the cluster-admin role.
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
Procedure
Create a project for an OpenTelemetry Collector instance.

apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability

Create a service account.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: observability
Grant the permissions to the service account for the k8sattributes and resourcedetection processors.

Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource.
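A minimal sketch of such a custom resource; the otel name and the TempoStack gateway endpoint are placeholder assumptions, and the numbered comment corresponds to the callout that follows:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
      k8sattributes: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlp:
        endpoint: "tempo-<example>-gateway:8090" # 1
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlp]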
- 1
- This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator.
Set the environment variables in the container with your instrumented application.
Name | Description | Default value
---|---|---
OTEL_SERVICE_NAME | Sets the value of the service.name resource attribute. | ""
OTEL_EXPORTER_OTLP_ENDPOINT | Base endpoint URL for any signal type with an optionally specified port number. | https://localhost:4317
OTEL_EXPORTER_OTLP_CERTIFICATE | Path to the certificate file for the TLS credentials of the gRPC client. | https://localhost:4317
OTEL_TRACES_SAMPLER | Sampler to be used for traces. | parentbased_always_on
OTEL_EXPORTER_OTLP_PROTOCOL | Transport protocol for the OTLP exporter. | grpc
OTEL_EXPORTER_OTLP_TIMEOUT | Maximum time interval for the OTLP exporter to wait for each batch export. | 10s
OTEL_EXPORTER_OTLP_INSECURE | Disables client transport security for gRPC requests. An HTTPS schema overrides it. | False
Chapter 6. Configuring metrics for the monitoring stack
As a cluster administrator, you can configure the OpenTelemetry Collector custom resource (CR) to perform the following tasks:
- Create a Prometheus ServiceMonitor CR for scraping the Collector's pipeline metrics and the enabled Prometheus exporters.
- Configure the Prometheus receiver to scrape metrics from the in-cluster monitoring stack.
6.1. Configuration for sending metrics to the monitoring stack
You can configure the OpenTelemetryCollector custom resource (CR) to create a Prometheus ServiceMonitor CR, or a PodMonitor CR for a sidecar deployment. A ServiceMonitor can scrape the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints.
Example of the OpenTelemetry Collector CR with the Prometheus exporter
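A minimal sketch of such a CR; the otel name and the exporter port are placeholder assumptions, and the numbered comment corresponds to the callout that follows:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  observability:
    metrics:
      enableMetrics: true # 1
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
        resource_to_telemetry_conversion:
          enabled: true
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [prometheus]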
- 1
- Configures the Red Hat build of OpenTelemetry Operator to create the Prometheus ServiceMonitor CR or PodMonitor CR to scrape the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints.
Setting enableMetrics to true creates the following two ServiceMonitor instances:

- One ServiceMonitor instance for the <instance_name>-collector-monitoring service. This ServiceMonitor instance scrapes the Collector's internal metrics.
- One ServiceMonitor instance for the <instance_name>-collector service. This ServiceMonitor instance scrapes the metrics exposed by the Prometheus exporter instances.
Alternatively, a manually created Prometheus PodMonitor CR can provide fine control, for example, removing duplicated labels added during Prometheus scraping.
Example of the PodMonitor CR that configures the monitoring stack to scrape the Collector metrics
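A minimal sketch of such a PodMonitor; the label selector and port names are placeholder assumptions:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: otel-collector
  podMetricsEndpoints:
  - port: metrics # internal Collector metrics
  - port: promexporter # Prometheus exporter metrics
    relabelings:
    - action: labeldrop
      regex: pod
    - action: labeldrop
      regex: container
    metricRelabelings:
    - action: labeldrop
      regex: instance
    - action: labeldrop
      regex: job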
6.2. Configuration for receiving metrics from the monitoring stack
A configured OpenTelemetry Collector custom resource (CR) can set up the Prometheus receiver to scrape metrics from the in-cluster monitoring stack.
Example of the OpenTelemetry Collector CR for scraping metrics from the in-cluster monitoring stack
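A minimal sketch of such a configuration; the names, namespace, and the federate match expression are placeholder assumptions, and the numbered comments correspond to the callouts that follow:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector
  namespace: observability
roleRef:
  kind: ClusterRole
  name: cluster-monitoring-view # 1
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cabundle
  namespace: observability
  annotations:
    service.beta.openshift.io/inject-cabundle: "true" # 2
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: deployment
  volumes:
  - name: cabundle-volume
    configMap:
      name: cabundle
  volumeMounts:
  - name: cabundle-volume
    mountPath: /etc/pki/ca-trust/source/service-ca
    readOnly: true
  config:
    receivers:
      prometheus: # 3
        config:
          scrape_configs:
          - job_name: 'federate'
            scrape_interval: 15s
            scheme: https
            tls_config:
              ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            honor_labels: false
            params:
              'match[]': # 4
              - '{__name__="<metric_name>"}'
            metrics_path: '/federate'
            static_configs:
            - targets:
              - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091"
    exporters:
      debug: # 5
        verbosity: detailed
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]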
- 1
- Assigns the cluster-monitoring-view cluster role to the service account of the OpenTelemetry Collector so that the Collector can access the metrics data.
- 2
- Injects the OpenShift service CA for configuring TLS in the Prometheus receiver.
- 3
- Configures the Prometheus receiver to scrape the federate endpoint from the in-cluster monitoring stack.
- 4
- Uses the Prometheus query language to select the metrics to be scraped. See the in-cluster monitoring documentation for more details and limitations of the federate endpoint.
- 5
- Configures the debug exporter to print the metrics to the standard output.
Chapter 7. Forwarding telemetry data
You can use the OpenTelemetry Collector to forward your telemetry data.
7.1. Forwarding traces to a TempoStack instance
To configure forwarding traces to a TempoStack instance, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources.
Prerequisites
- The Red Hat build of OpenTelemetry Operator is installed.
- The Tempo Operator is installed.
- A TempoStack instance is deployed on the cluster.
Procedure
Create a service account for the OpenTelemetry Collector.

Example ServiceAccount

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment

Create a cluster role for the service account.
Example ClusterRole
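A minimal sketch of such a cluster role; the numbered comments correspond to the callouts that follow:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: [""]
  resources: [pods, namespaces] # 1
  verbs: [get, watch, list]
- apiGroups: [apps]
  resources: [replicasets] # 2
  verbs: [get, list, watch]
- apiGroups: [config.openshift.io]
  resources: [infrastructures, infrastructures/status] # 3
  verbs: [get, watch, list]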
- 1
- This example uses the Kubernetes Attributes Processor, which requires these permissions for the pods and namespaces resources.
- 2
- Also due to the Kubernetes Attributes Processor, these permissions are required for the replicasets resources.
- 3
- This example also uses the Resource Detection Processor, which requires these permissions for the infrastructures and status resources.
Bind the cluster role to the service account.
Example ClusterRoleBinding
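A minimal sketch of such a binding; the namespace is a placeholder assumption:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io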
Create the YAML file to define the OpenTelemetryCollector custom resource (CR).

Example OpenTelemetryCollector
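A minimal sketch of such a CR, assuming the tempo-simplest-distributor endpoint named in the callouts; the numbered comments correspond to the callouts that follow:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config:
    receivers: # 2
      otlp:
        protocols:
          grpc: {}
          http: {}
      jaeger:
        protocols:
          grpc: {}
          thrift_http: {}
      opencensus: {}
      zipkin: {}
    processors:
      batch: {}
      k8sattributes: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlp:
        endpoint: "tempo-simplest-distributor:4317" # 1
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp, jaeger, opencensus, zipkin]
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlp]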
- 1
- The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created.
- 2
- The Collector is configured with a receiver for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol.
You can deploy telemetrygen as a test:
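A minimal sketch of such a test Job; the image tag and the otel-collector service name are placeholder assumptions:

apiVersion: batch/v1
kind: Job
metadata:
  name: telemetrygen
spec:
  template:
    spec:
      containers:
      - name: telemetrygen
        image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest
        args:
        - traces
        - --otlp-endpoint=otel-collector:4317
        - --otlp-insecure
        - --duration=30s
        - --workers=1
      restartPolicy: Never
  backoffLimit: 4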
7.2. Forwarding logs to a LokiStack instance
You can deploy the OpenTelemetry Collector to forward logs to a LokiStack instance by using the openshift-logging tenants mode.
Prerequisites
- The Red Hat build of OpenTelemetry Operator is installed.
- The Loki Operator is installed.
- A supported LokiStack instance is deployed on the cluster. For more information about the supported LokiStack configuration, see Logging.
Procedure
Create a service account for the OpenTelemetry Collector.

Example ServiceAccount object

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: openshift-logging
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a cluster role that grants the Collector’s service account the permissions to push logs to the
LokiStack
application tenant.Example
ClusterRole
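A minimal sketch of such a cluster role, assuming the loki.grafana.com API group used by the LokiStack gateway:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-logs-writer
rules:
- apiGroups: [loki.grafana.com]
  resourceNames: [logs]
  resources: [application]
  verbs: [create]
- apiGroups: [""]
  resources: [pods, namespaces, nodes]
  verbs: [get, watch, list]
- apiGroups: [apps]
  resources: [replicasets]
  verbs: [get, list, watch]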
Bind the cluster role to the service account.

Example ClusterRoleBinding object
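A minimal sketch of such a binding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-logs-writer
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: openshift-logging
roleRef:
  kind: ClusterRole
  name: otel-collector-logs-writer
  apiGroup: rbac.authorization.k8s.io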
Create an OpenTelemetryCollector custom resource (CR) object.

Example OpenTelemetryCollector CR object
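A sketch of one possible shape of such a CR; the gateway endpoint and the resource-attribute mappings are assumptions, and the numbered comments correspond to the callouts that follow:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: openshift-logging
spec:
  serviceAccount: otel-collector-deployment
  config:
    extensions:
      bearertokenauth: # 2
        filename: "/var/run/secrets/kubernetes.io/serviceaccount/token"
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      k8sattributes: {}
      resource:
        attributes: # 1
        - key: kubernetes.namespace_name
          from_attribute: k8s.namespace.name
          action: upsert
        - key: kubernetes.pod_name
          from_attribute: k8s.pod.name
          action: upsert
        - key: kubernetes.container_name
          from_attribute: k8s.container.name
          action: upsert
        - key: log_type
          value: application
          action: upsert
    exporters:
      otlphttp: # 3
        endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
        auth:
          authenticator: bearertokenauth
    service:
      extensions: [bearertokenauth]
      pipelines:
        logs:
          receivers: [otlp]
          processors: [k8sattributes, resource]
          exporters: [otlphttp]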
- 1
- Provides the following resource attributes to be used by the web console: kubernetes.namespace_name, kubernetes.pod_name, kubernetes.container_name, and log_type.
- 2
- Enables the BearerTokenAuth Extension that is required by the OTLP HTTP Exporter.
- 3
- Enables the OTLP HTTP Exporter to export logs from the Collector.
You can deploy telemetrygen as a test:
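A minimal sketch of an analogous test Job for logs; the image tag and service name are placeholder assumptions:

apiVersion: batch/v1
kind: Job
metadata:
  name: telemetrygen
spec:
  template:
    spec:
      containers:
      - name: telemetrygen
        image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest
        args:
        - logs
        - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317
        - --otlp-insecure
        - --duration=180s
        - --workers=1
      restartPolicy: Never
  backoffLimit: 4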
Chapter 8. Configuring the OpenTelemetry Collector metrics
The OpenTelemetry Collector exposes internal metrics about its own operation. The following list shows some of these metrics:
- Collector memory usage
- CPU utilization
- Number of active traces and spans processed
- Dropped spans, logs, or metrics
- Exporter and receiver statistics
The Red Hat build of OpenTelemetry Operator automatically creates a service named <instance_name>-collector-monitoring that exposes the Collector's internal metrics. This service listens on port 8888 by default.
You can use these metrics for monitoring the Collector's performance, resource consumption, and other internal behaviors. You can also use a Prometheus instance or another monitoring tool to scrape these metrics from the <instance_name>-collector-monitoring service.
When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true, the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics.
Prerequisites
- Monitoring for user-defined projects is enabled in the cluster.
Procedure
To enable metrics of an OpenTelemetry Collector instance, set the spec.observability.metrics.enableMetrics field to true:
Verification
You can use the Administrator view of the web console to verify successful configuration:
- Go to Observe → Targets.
- Filter by Source: User.
- Check that the ServiceMonitors or PodMonitors in the opentelemetry-collector-<instance_name> format have the Up status.
Additional resources
Chapter 9. Gathering the observability data from multiple clusters
For a multicluster configuration, you can create one OpenTelemetry Collector instance in each of the remote clusters and then forward all the telemetry data to one OpenTelemetry Collector instance.
Prerequisites
- The Red Hat build of OpenTelemetry Operator is installed.
- The Tempo Operator is installed.
- A TempoStack instance is deployed on the cluster.
- The following certificates are mounted: an Issuer, a self-signed certificate, a CA issuer, and the client and server certificates. To create any of these certificates, see step 1.
Procedure
Mount the following certificates in the OpenTelemetry Collector instance, skipping any certificates that are already mounted. A combined sketch follows this list.

- An Issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift.
- A self-signed certificate.
- A CA issuer.
- The client and server certificates.
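A minimal combined sketch of these cert-manager objects; all names and the otel-gateway.example.com host are placeholder assumptions:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ca
spec:
  isCA: true
  commonName: ca
  secretName: ca
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: server
spec:
  secretName: server-tls
  dnsNames:
  - otel-gateway.example.com # hypothetical hostname of the central Collector
  issuerRef:
    name: ca-issuer
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: client
spec:
  secretName: client-tls
  dnsNames:
  - otel-client # hypothetical client identity
  issuerRef:
    name: ca-issuer
    kind: Issuer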
Create a service account for the OpenTelemetry Collector instance.

Example ServiceAccount

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment

Create a cluster role for the service account.
Example ClusterRole
Bind the cluster role to the service account.
Example ClusterRoleBinding
Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters.

Example OpenTelemetryCollector custom resource for the edge clusters
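A minimal sketch of such a CR; the namespace, the central-cluster endpoint, and the certificate secret name are placeholder assumptions, and the numbered comment corresponds to the callout that follows:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: otel-collector-example
spec:
  mode: daemonset
  serviceAccount: otel-collector-deployment
  volumes:
  - name: otel-certs
    secret:
      name: otel-certs
  volumeMounts:
  - name: otel-certs
    mountPath: /certs
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters:
      otlphttp:
        endpoint: https://observability-cluster.example.com:443 # 1
        tls:
          insecure: false
          ca_file: /certs/ca.crt
          cert_file: /certs/tls.crt
          key_file: /certs/tls.key
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlphttp]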
- 1
- The Collector exporter is configured to export OTLP HTTP and points to the OpenTelemetry Collector from the central cluster.
Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster.

Example OpenTelemetryCollector custom resource for the central cluster
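A minimal sketch of such a CR; the namespace, the Tempo distributor endpoint, and the certificate secret name are placeholder assumptions:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otlp-receiver
  namespace: observability
spec:
  mode: deployment
  ingress:
    type: route
    route:
      termination: passthrough
  volumes:
  - name: otel-certs
    secret:
      name: otel-certs
  volumeMounts:
  - name: otel-certs
    mountPath: /certs
  config:
    receivers:
      otlp:
        protocols:
          http:
            tls:
              cert_file: /certs/tls.crt
              key_file: /certs/tls.key
              client_ca_file: /certs/ca.crt
    exporters:
      otlp:
        endpoint: "tempo-<example>-distributor:4317"
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]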
Chapter 10. Troubleshooting
The OpenTelemetry Collector offers multiple ways to measure its health as well as investigate data ingestion issues.
10.1. Collecting diagnostic data from the command line
When submitting a support case, it is helpful to provide diagnostic information about your cluster to Red Hat Support. You can use the oc adm must-gather tool to gather diagnostic data for resources of various types, such as OpenTelemetryCollector, Instrumentation, and the created resources like Deployment, Pod, or ConfigMap. The oc adm must-gather tool creates a new pod that collects this data.
Procedure
From the directory where you want to save the collected data, run the oc adm must-gather command to collect the data:

$ oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- \
/usr/bin/must-gather --operator-namespace <operator_namespace>

- 1
- The default namespace where the Operator is installed is openshift-opentelemetry-operator.
Verification
- Verify that the new directory is created and contains the collected data.
10.2. Getting the OpenTelemetry Collector logs
You can get the logs for the OpenTelemetry Collector as follows.
Procedure
Set the relevant log level in the OpenTelemetryCollector custom resource (CR):

config:
  service:
    telemetry:
      logs:
        level: debug

- 1
- The Collector's log level. Supported values include info, warn, error, or debug. Defaults to info.
- Use the oc logs command or the web console to retrieve the logs.
10.3. Exposing the metrics
The OpenTelemetry Collector exposes the following metrics about the data volumes it has processed:
otelcol_receiver_accepted_spans
- The number of spans successfully pushed into the pipeline.
otelcol_receiver_refused_spans
- The number of spans that could not be pushed into the pipeline.
otelcol_exporter_sent_spans
- The number of spans successfully sent to the destination.
otelcol_exporter_enqueue_failed_spans
- The number of spans failed to be added to the sending queue.
otelcol_receiver_accepted_logs
- The number of logs successfully pushed into the pipeline.
otelcol_receiver_refused_logs
- The number of logs that could not be pushed into the pipeline.
otelcol_exporter_sent_logs
- The number of logs successfully sent to the destination.
otelcol_exporter_enqueue_failed_logs
- The number of logs failed to be added to the sending queue.
otelcol_receiver_accepted_metrics
- The number of metrics successfully pushed into the pipeline.
otelcol_receiver_refused_metrics
- The number of metrics that could not be pushed into the pipeline.
otelcol_exporter_sent_metrics
- The number of metrics successfully sent to the destination.
otelcol_exporter_enqueue_failed_metrics
- The number of metrics failed to be added to the sending queue.
You can use these metrics to troubleshoot issues with your Collector. For example, if the otelcol_receiver_refused_spans metric has a high value, it indicates that the Collector is not able to process incoming spans.
The Operator creates a <cr_name>-collector-monitoring telemetry service that you can use to scrape the metrics endpoint.
Procedure
Enable the telemetry service by adding the following lines in the OpenTelemetryCollector custom resource (CR):
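A minimal sketch of the relevant fields; the numbered comment corresponds to the callout that follows:

config:
  service:
    telemetry:
      metrics:
        address: ":8888" # 1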
- 1
- The port at which the internal collector metrics are exposed. Defaults to :8888.
Retrieve the metrics by running the following command, which port-forwards the Collector pod:

$ oc port-forward <collector_pod>
In the OpenTelemetryCollector CR, set the enableMetrics field to true to scrape internal metrics.

Depending on the deployment mode of the OpenTelemetry Collector, the internal metrics are scraped by using PodMonitors or ServiceMonitors.

Note: Alternatively, if you do not set the enableMetrics field to true, you can access the metrics endpoint at http://localhost:8888/metrics.
Optional: If the User Workload Monitoring feature is enabled in the web console, go to Observe → Dashboards in the web console, and then select the OpenTelemetry Collector dashboard from the drop-down list to view it. For more information about the User Workload Monitoring feature, see "Enabling monitoring for user-defined projects" in Monitoring.
Tip: You can filter the visualized data, such as spans or metrics, by the Collector instance, namespace, or OpenTelemetry components such as processors, receivers, or exporters.
10.4. Debug Exporter
You can configure the Debug Exporter to export the collected data to the standard output.
Procedure
Configure the OpenTelemetryCollector custom resource as follows:
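A minimal sketch of the relevant fields:

config:
  exporters:
    debug:
      verbosity: detailed
  service:
    pipelines:
      traces:
        exporters: [debug]
      metrics:
        exporters: [debug]
      logs:
        exporters: [debug]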
- Use the oc logs command or the web console to export the logs to the standard output.
10.5. Using the Network Observability Operator for troubleshooting
You can debug the traffic between your observability components by visualizing it with the Network Observability Operator.
Prerequisites
- You have installed the Network Observability Operator as explained in "Installing the Network Observability Operator".
Procedure
- In the OpenShift Container Platform web console, go to Observe → Network Traffic → Topology.
- Select Namespace to filter the workloads by the namespace in which your OpenTelemetry Collector is deployed.
- Use the network traffic visuals to troubleshoot possible issues. See "Observing the network traffic from the Topology view" for more details.
10.6. Troubleshooting the instrumentation
To troubleshoot the instrumentation, look for any of the following issues:
- Issues with instrumentation injection into your workload
- Issues with data generation by the instrumentation libraries
10.6.1. Troubleshooting instrumentation injection into your workload
To troubleshoot instrumentation injection, you can perform the following activities:

- Checking if the Instrumentation object was created
- Checking if the init-container started
- Checking if the resources were deployed in the correct order
- Searching for errors in the Operator logs
- Double-checking the pod annotations

Procedure

Run the following command to verify that the Instrumentation object was successfully created:

$ oc get instrumentation -n <workload_project>

- 1
- The namespace where the instrumentation was created.

Run the following command to verify that the opentelemetry-auto-instrumentation init-container successfully started, which is a prerequisite for instrumentation injection into workloads:

$ oc get events -n <workload_project>

- 1
- The namespace where the instrumentation is injected for workloads.

Example output

... Created container opentelemetry-auto-instrumentation
... Started container opentelemetry-auto-instrumentation

Verify that the resources were deployed in the correct order for the auto-instrumentation to work correctly. The correct order is to deploy the Instrumentation custom resource (CR) before the application. For information about the Instrumentation CR, see the section "Configuring the instrumentation".

Note: When the pod starts, the Red Hat build of OpenTelemetry Operator checks the Instrumentation CR for annotations containing instructions for injecting auto-instrumentation. Generally, the Operator then adds an init-container to the application's pod that injects the auto-instrumentation and environment variables into the application's container. If the Instrumentation CR is not available to the Operator when the application is deployed, the Operator is unable to inject the auto-instrumentation.

Fixing the order of deployment requires the following steps:

- Update the instrumentation settings.
- Delete the instrumentation object.
- Redeploy the application.

Run the following command to inspect the Operator logs for instrumentation errors:

$ oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow

Troubleshoot pod annotations for the instrumentations for a specific programming language. See the required annotation fields and values in "Configuring the instrumentation".

Verify that the application pods that you are instrumenting are labeled with correct annotations and that the appropriate auto-instrumentation settings have been applied.

Example

instrumentation.opentelemetry.io/inject-python="true"

Example command to get pod annotations for an instrumented Python application

$ oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations["instrumentation.opentelemetry.io/inject-python"]=="true")]}{.metadata.name}{"\n"}{end}'

- Verify that the annotation applied to the instrumentation object is correct for the programming language that you are instrumenting.
- If there are multiple instrumentations in the same namespace, specify the name of the Instrumentation object in their annotations.

Example

instrumentation.opentelemetry.io/inject-nodejs: "<instrumentation_object>"

- If the Instrumentation object is in a different namespace, specify the namespace in the annotation.

Example

instrumentation.opentelemetry.io/inject-nodejs: "<other_namespace>/<instrumentation_object>"

- Verify that the OpenTelemetryCollector custom resource specifies the auto-instrumentation annotations under spec.template.metadata.annotations. If the auto-instrumentation annotations are in spec.metadata.annotations instead, move them into spec.template.metadata.annotations.
10.6.2. Troubleshooting telemetry data generation by the instrumentation libraries
You can troubleshoot telemetry data generation by the instrumentation libraries by checking the endpoint, looking for errors in your application logs, and verifying that the Collector is receiving the telemetry data.
Procedure
Verify that the instrumentation is transmitting data to the correct endpoint:
$ oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}'

The default endpoint http://localhost:4317 for the Instrumentation object is only applicable to a Collector instance that is deployed as a sidecar in your application pod. If you are using an incorrect endpoint, correct it by editing the Instrumentation object and redeploying your application.

Inspect your application logs for error messages that might indicate that the instrumentation is malfunctioning:
$ oc logs <application_pod> -n <workload_project>

- If the application logs contain error messages that indicate that the instrumentation might be malfunctioning, install the OpenTelemetry SDK and libraries locally. Then run your application locally and troubleshoot for issues between the instrumentation libraries and your application without OpenShift Container Platform.
- Use the Debug Exporter to verify that the telemetry data is reaching the destination OpenTelemetry Collector instance. For more information, see "Debug Exporter".
Chapter 11. Migrating
The deprecated Red Hat OpenShift Distributed Tracing Platform (Jaeger) 3.5 was the last release of the Red Hat OpenShift Distributed Tracing Platform (Jaeger) that Red Hat supports.
Support for the deprecated Red Hat OpenShift Distributed Tracing Platform (Jaeger) ends on November 3, 2025.
The Red Hat OpenShift Distributed Tracing Platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift.
You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the Distributed Tracing Platform documentation.
If you are already using the Red Hat OpenShift Distributed Tracing Platform (Jaeger) for your applications, you can migrate to the Red Hat build of OpenTelemetry, which is based on the OpenTelemetry open-source project.
The Red Hat build of OpenTelemetry provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the Red Hat build of OpenTelemetry can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications.
Migration from the Distributed Tracing Platform (Jaeger) to the Red Hat build of OpenTelemetry requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments.
11.1. Migrating with sidecars
The Red Hat build of OpenTelemetry Operator supports sidecar injection into deployment workloads, so you can migrate from a Distributed Tracing Platform (Jaeger) sidecar to a Red Hat build of OpenTelemetry sidecar.
Prerequisites
- The Red Hat OpenShift Distributed Tracing Platform (Jaeger) is used on the cluster.
- The Red Hat build of OpenTelemetry is installed.
Procedure
Configure the OpenTelemetry Collector as a sidecar.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- This endpoint points to the Gateway of a TempoStack instance deployed by using the
<example>
Tempo Operator.
Create a service account for running your application.
apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar
apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar
Create a cluster role for the permissions needed by some processors. For example, the resourcedetection processor requires permissions for the infrastructures and infrastructures/status resources.
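A sketch of such a cluster role, assuming that only the resourcedetection processor needs additional permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-sidecar
rules:
# The resourcedetection processor reads cluster infrastructure details.
- apiGroups: ["config.openshift.io"]
  resources: ["infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]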
Create a ClusterRoleBinding to set the permissions for the service account.
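A sketch of the binding, assuming the service account and cluster role names from the previous steps; replace <application_namespace> with the namespace where your application and the sidecar run:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-sidecar
subjects:
# The service account created earlier, in the application namespace
- kind: ServiceAccount
  name: otel-collector-sidecar
  namespace: <application_namespace>
roleRef:
  # The cluster role created in the previous step
  kind: ClusterRole
  name: otel-collector-sidecar
  apiGroup: rbac.authorization.k8s.io

- Deploy the OpenTelemetry Collector as a sidecar.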
- Remove the injected Jaeger Agent from your application by removing the "sidecar.jaegertracing.io/inject": "true" annotation from your Deployment object.
- Enable automatic injection of the OpenTelemetry sidecar by adding the sidecar.opentelemetry.io/inject: "true" annotation to the .spec.template.metadata.annotations field of your Deployment object, as shown in the example after this list.
- Use the created service account for the deployment of your application to allow the processors to get the correct information and add it to your traces.
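For example, the relevant parts of a Deployment might look like the following sketch after these changes; myapp and <application_image> are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        sidecar.opentelemetry.io/inject: "true" # replaces the removed "sidecar.jaegertracing.io/inject": "true" annotation
    spec:
      serviceAccountName: otel-collector-sidecar # service account created earlier
      containers:
      - name: myapp
        image: <application_image>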
11.2. Migrating without sidecars
You can migrate from the Distributed Tracing Platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecar deployment.
Prerequisites
- The Red Hat OpenShift Distributed Tracing Platform (Jaeger) is used on the cluster.
- The Red Hat build of OpenTelemetry is installed.
Procedure
- Configure the OpenTelemetry Collector deployment.
Create the project where the OpenTelemetry Collector will be deployed.

apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability
Create a service account for running the OpenTelemetry Collector instance.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: observability
Create a cluster role for setting the required permissions for the processors.
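The rules depend on which processors you enable. A sketch, assuming the k8sattributes and resourcedetection processors:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
# The k8sattributes processor reads pod and namespace metadata.
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "watch", "list"]
# The resourcedetection processor reads cluster infrastructure details.
- apiGroups: ["config.openshift.io"]
  resources: ["infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]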
Create a ClusterRoleBinding to set the permissions for the service account.
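A sketch of the binding, assuming the otel-collector cluster role above and the otel-collector-deployment service account in the observability project:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io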
Create the OpenTelemetry Collector instance.
Note: This collector exports traces to a TempoStack instance. You must create your TempoStack instance by using the Red Hat Tempo Operator and specify the correct endpoint in the exporter configuration.
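A minimal sketch of such an instance, assuming Jaeger-protocol ingestion and OTLP export; tempo-<example>-gateway is a placeholder for the endpoint of your TempoStack gateway:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config:
    receivers:
      # Accept the Jaeger protocol so you do not need to change the SDKs in your applications.
      jaeger:
        protocols:
          grpc: {}
          thrift_binary: {}
          thrift_compact: {}
          thrift_http: {}
    processors:
      batch: {}
      # Enrich spans with Kubernetes and cluster attributes; both need the cluster role above.
      k8sattributes: {}
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlp:
        endpoint: "tempo-<example>-gateway:8090" # adjust host and port to your TempoStack deployment
        tls:
          insecure: true # for demonstration only; configure TLS for production
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          processors: [k8sattributes, resourcedetection, batch]
          exporters: [otlp]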
- Point your tracing endpoint to the OpenTelemetry Collector.
If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint.
Example of exporting traces by using the jaeger exporter with Golang:

exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url)))

The url in this example points to the OpenTelemetry Collector API endpoint.
Chapter 12. Upgrading
For version upgrades, the Red Hat build of OpenTelemetry Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster.
The OLM runs in OpenShift Container Platform by default. It queries for available Operators as well as for upgrades to installed Operators.
The Red Hat build of OpenTelemetry Operator automatically upgrades all OpenTelemetryCollector custom resources that it manages: the Operator reconciles all managed instances during its startup. If an upgrade fails, the Operator retries it with exponential backoff and attempts the upgrade again the next time it restarts.
When the Red Hat build of OpenTelemetry Operator is upgraded to the new version, it scans for running OpenTelemetry Collector instances that it manages and upgrades them to the version corresponding to the Operator’s new version.
Chapter 13. Removing
The steps for removing the Red Hat build of OpenTelemetry from an OpenShift Container Platform cluster are as follows:
- Shut down all Red Hat build of OpenTelemetry pods.
- Remove any OpenTelemetryCollector instances.
- Remove the Red Hat build of OpenTelemetry Operator.
13.1. Removing an OpenTelemetry Collector instance by using the web console
You can remove an OpenTelemetry Collector instance in the Administrator view of the web console.
Prerequisites
- You are logged in to the web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure
- Go to Operators → Installed Operators → Red Hat build of OpenTelemetry Operator → OpenTelemetryInstrumentation or OpenTelemetryCollector.
- To remove the relevant instance, select the Options menu for the instance, and then select Delete … → Delete.
- Optional: Remove the Red Hat build of OpenTelemetry Operator.
13.2. Removing an OpenTelemetry Collector instance by using the CLI
You can remove an OpenTelemetry Collector instance on the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

Tip
- Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
- Run oc login:

$ oc login --username=<your_username>
Procedure
Get the name of the OpenTelemetry Collector instance by running the following command:

$ oc get deployments -n <project_of_opentelemetry_instance>

Remove the OpenTelemetry Collector instance by running the following command:

$ oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>

- Optional: Remove the Red Hat build of OpenTelemetry Operator.
Verification
To verify successful removal of the OpenTelemetry Collector instance, run oc get deployments again:

$ oc get deployments -n <project_of_opentelemetry_instance>
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.