This documentation is for a release that is no longer maintained
See documentation for the latest supported version 3 or the latest supported version 4.
Red Hat build of OpenTelemetry
Chapter 1. Release notes for Red Hat build of OpenTelemetry
1.1. Red Hat build of OpenTelemetry overview
Red Hat build of OpenTelemetry is based on the open source OpenTelemetry project, which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. The Red Hat build of OpenTelemetry product provides support for deploying and managing the OpenTelemetry Collector and simplifies workload instrumentation.
The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs.
The OpenTelemetry Collector has a number of features including the following:
- Data collection and processing hub
- It acts as a central component that gathers telemetry data, such as metrics and traces, from various sources, including instrumented applications and infrastructure.
- Customizable telemetry data pipeline
- The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers.
- Auto-instrumentation features
- Automatic instrumentation simplifies the process of adding observability to applications. Developers don’t need to manually instrument their code for basic telemetry data.
Here are some of the use cases for the OpenTelemetry Collector:
- Centralized data collection
- In a microservices architecture, the Collector can be deployed to aggregate data from multiple services.
- Data enrichment and processing
- Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data.
- Multi-backend receiving and exporting
- The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously.
1.2. Red Hat build of OpenTelemetry 3.0
Red Hat build of OpenTelemetry 3.0 is based on OpenTelemetry 0.89.0.
1.2.1. New features and enhancements
This update introduces the following enhancements:
- The OpenShift distributed tracing data collection Operator is renamed as the Red Hat build of OpenTelemetry Operator.
- Support for the ARM architecture.
- Support for the Prometheus receiver for metrics collection.
- Support for the Kafka receiver and exporter for sending traces and metrics to Kafka.
- Support for cluster-wide proxy environments.
- The Red Hat build of OpenTelemetry Operator creates the Prometheus ServiceMonitor custom resource if the Prometheus exporter is enabled.
- The Operator enables the Instrumentation custom resource, which allows injecting upstream OpenTelemetry auto-instrumentation libraries.
1.2.2. Removal notice
- In Red Hat build of OpenTelemetry 3.0, the Jaeger exporter has been removed. Bug fixes and support are provided only through the end of the 2.9 lifecycle. As an alternative to the Jaeger exporter for sending data to the Jaeger collector, you can use the OTLP exporter.
1.2.3. Bug fixes
This update introduces the following bug fixes:
- Fixed support for disconnected environments when using the oc adm catalog mirror CLI command.
1.2.4. Known issues
Currently, the cluster monitoring of the Red Hat build of OpenTelemetry Operator is disabled due to a bug (TRACING-3761). The bug prevents the cluster monitoring from scraping metrics from the Red Hat build of OpenTelemetry Operator because of a missing openshift.io/cluster-monitoring=true label, which is required for the cluster monitoring and the service monitor object.
Workaround
You can enable the cluster monitoring as follows:
- Add the following label in the Operator namespace:

  $ oc label namespace openshift-opentelemetry-operator openshift.io/cluster-monitoring=true

- Create a service monitor, role, and role binding, as in the sketch after this list:
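The original manifests are not preserved on this page. The following is a minimal sketch of the three objects, assuming the default openshift-opentelemetry-operator namespace; the object names, port name, and label selector are illustrative assumptions.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: opentelemetry-operator-metrics   # assumed name
  namespace: openshift-opentelemetry-operator
spec:
  endpoints:
  - port: https                          # assumed metrics port name on the Operator service
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetry-operator   # assumed label
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: otel-operator-prometheus
  namespace: openshift-opentelemetry-operator
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: otel-operator-prometheus
  namespace: openshift-opentelemetry-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: otel-operator-prometheus
subjects:
- kind: ServiceAccount
  name: prometheus-k8s                   # the cluster monitoring Prometheus service account
  namespace: openshift-monitoring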
1.3. Getting support
If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal. From the Customer Portal, you can:
- Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
- Submit a support case to Red Hat Support.
- Access other product documentation.
To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console. Insights provides details about issues and, if available, information on how to solve a problem.
If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.
1.4. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 2. Installing the Red Hat build of OpenTelemetry
Installing the Red Hat build of OpenTelemetry involves the following steps:
- Installing the Red Hat build of OpenTelemetry Operator.
- Creating a namespace for an OpenTelemetry Collector instance.
- Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance.
2.1. Installing the Red Hat build of OpenTelemetry from the web console
You can install the Red Hat build of OpenTelemetry from the Administrator view of the web console.
Prerequisites
- You are logged in to the web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure
Install the Red Hat build of OpenTelemetry Operator:
- Go to Operators → OperatorHub and search for Red Hat build of OpenTelemetry Operator. Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat → Install → Install → View Operator.
Important: This installs the Operator with the default presets:
- Update channel → stable
- Installation mode → All namespaces on the cluster
- Installed Namespace → openshift-operators
- Update approval → Automatic
- In the Details tab of the installed Operator page, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
- Create a project of your choice for the OpenTelemetry Collector instance that you will create in the next step by going to Home → Projects → Create Project.
Create an OpenTelemetry Collector instance.
- Go to Operators → Installed Operators.
- Select OpenTelemetry Collector → Create OpenTelemetry Collector → YAML view.
In the YAML view, customize the OpenTelemetryCollector custom resource (CR) with the OTLP, Jaeger, and Zipkin receivers and the debug exporter, as in the sketch after this step.
- Select Create.
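The original YAML is not preserved on this page. The following is a minimal sketch of such a CR; the metadata values are placeholders.

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <project_of_opentelemetry_collector_instance>
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      zipkin:
    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters:
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp, jaeger, zipkin]
          processors: [memory_limiter, batch]
          exporters: [debug]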
Verification
- Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance.
- Go to Operators → Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready.
- Go to Workloads → Pods to verify that all the component pods of the OpenTelemetry Collector instance are running.
2.2. Installing the Red Hat build of OpenTelemetry by using the CLI
You can install the Red Hat build of OpenTelemetry from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

  Tip
  - Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
  - Run oc login:

    $ oc login --username=<your_username>
Procedure
Install the Red Hat build of OpenTelemetry Operator:
Create a project for the Red Hat build of OpenTelemetry Operator by running the following command:
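The command block was stripped from this page. A minimal sketch, assuming the default openshift-opentelemetry-operator project name and cluster-monitoring label:

$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  labels:
    kubernetes.io/metadata.name: openshift-opentelemetry-operator
    openshift.io/cluster-monitoring: "true"
  name: openshift-opentelemetry-operator
EOF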
Create an Operator group by running the following command:
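A minimal sketch; the OperatorGroup name mirroring the namespace is an assumption:

$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-opentelemetry-operator
  namespace: openshift-opentelemetry-operator
spec:
  upgradeStrategy: Default
EOF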
Create a subscription by running the following command:
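A minimal sketch that follows the default presets listed in the web console procedure (stable channel, automatic approval); the opentelemetry-product package name is an assumption:

$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opentelemetry-product
  namespace: openshift-opentelemetry-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: opentelemetry-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF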
Check the Operator status by running the following command:

  $ oc get csv -n openshift-opentelemetry-operator
Create a project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step:
To create a project without metadata, run the following command:
  $ oc new-project <project_of_opentelemetry_collector_instance>

To create a project with metadata, run the following command:
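A minimal sketch; the label key and value are placeholders:

$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <project_of_opentelemetry_collector_instance>
  labels:
    <label_key>: <label_value>
EOF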
Create an OpenTelemetry Collector instance in the project that you created for it.
Note: You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster.
Customize the OpenTelemetryCollector custom resource (CR) with the OTLP, Jaeger, and Zipkin receivers and the debug exporter, for example as in the following sketch.
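The CR body was stripped here; it matches the web console sketch earlier in this chapter. An abbreviated sketch with placeholder metadata:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <project_of_opentelemetry_collector_instance>
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
      jaeger:
        protocols:
          grpc:
          thrift_http:
      zipkin:
    processors:
      batch:
    exporters:
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp, jaeger, zipkin]
          processors: [batch]
          exporters: [debug]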
Apply the customized CR by running the following command:

  $ oc apply -f - << EOF
  <OpenTelemetryCollector_custom_resource>
  EOF
Verification
Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command:

  $ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml

Get the OpenTelemetry Collector service by running the following command:
  $ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
Chapter 3. Configuring and deploying the Red Hat build of OpenTelemetry
The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat build of OpenTelemetry resources. You can install the default configuration or modify the file.
3.1. OpenTelemetry Collector configuration options
The OpenTelemetry Collector consists of five types of components that access telemetry data:
- Receivers
- A receiver, which can be push or pull based, is how data gets into the Collector. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources.
- Processors
- Optional. Processors process the data between the time it is received and the time it is exported. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters.
- Exporters
- An exporter, which can be push or pull based, is how you send data to one or more back ends or destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings.
- Connectors
- A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.
- Extensions
- An extension adds capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically.
You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the spec.config.service
section of the YAML file. As a best practice, only enable the components that you need.
Example of the OpenTelemetry Collector custom resource file
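The example itself was stripped; the following sketch restores its shape with an assumed OTLP receiver, batch processor, and debug exporter. The numbered comment matches the callout below.

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      debug:
    service:    # 1
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]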
- 1: If a component is configured but not defined in the service section, the component is not enabled.
Parameter | Description | Values | Default
---|---|---|---
receivers: | A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. | | None
processors: | Processors process the data between reception and export. By default, no processors are enabled. | | None
exporters: | An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. | | None
connectors: | Connectors join pairs of pipelines: they consume data as end-of-pipeline exporters and emit data as start-of-pipeline receivers, and they can be used to summarize, replicate, or route consumed data. | | None
extensions: | Optional components for tasks that do not involve processing telemetry data. | | None
service: pipelines: | Components are enabled by adding them to a pipeline under service.pipelines. | |
service: pipelines: traces: receivers: | You enable receivers for tracing by adding them under service.pipelines.traces. | | None
service: pipelines: traces: processors: | You enable processors for tracing by adding them under service.pipelines.traces. | | None
service: pipelines: traces: exporters: | You enable exporters for tracing by adding them under service.pipelines.traces. | | None
service: pipelines: metrics: receivers: | You enable receivers for metrics by adding them under service.pipelines.metrics. | | None
service: pipelines: metrics: processors: | You enable processors for metrics by adding them under service.pipelines.metrics. | | None
service: pipelines: metrics: exporters: | You enable exporters for metrics by adding them under service.pipelines.metrics. | | None
3.1.1. OpenTelemetry Collector components
3.1.1.1. Receivers
Receivers get data into the Collector.
3.1.1.1.1. OTLP Receiver
The OTLP receiver ingests traces and metrics using the OpenTelemetry protocol (OTLP).
OpenTelemetry Collector custom resource with an enabled OTLP receiver
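The stripped example is reconstructed below as a sketch of the spec.config fragment, based on the callout notes; the certificate paths are placeholders. The # n comments match the callouts that follow.

config: |
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317    # 1
          tls:                      # 2
            ca_file: ca.pem
            cert_file: cert.pem
            key_file: key.pem
            client_ca_file: ca.pem  # 3
            reload_interval: 1h     # 4
        http:
          endpoint: 0.0.0.0:4318    # 5
          tls: {}                   # 6
  service:
    pipelines:
      traces:
        receivers: [otlp]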
- 1: The OTLP gRPC endpoint. If omitted, the default 0.0.0.0:4317 is used.
- 2: The server-side TLS configuration. Defines paths to TLS certificates. If omitted, TLS is disabled.
- 3: The path to the TLS certificate at which the server verifies a client certificate. This sets the value of ClientCAs and ClientAuth to RequireAndVerifyClientCert in the TLSConfig. For more information, see the Config of the Golang TLS package.
- 4: Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval accepts a string containing valid units of time such as ns, us (or µs), ms, s, m, and h.
- 5: The OTLP HTTP endpoint. The default value is 0.0.0.0:4318.
- 6: The server-side TLS configuration. For more information, see the grpc protocol configuration section.
3.1.1.1.2. Jaeger Receiver
The Jaeger receiver ingests traces in the Jaeger formats.
OpenTelemetry Collector custom resource with an enabled Jaeger receiver
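A minimal sketch of the stripped example, reconstructed from the callout notes:

config: |
  receivers:
    jaeger:
      protocols:
        grpc:
          endpoint: 0.0.0.0:14250      # 1
          tls: {}                      # 5
        thrift_http:
          endpoint: 0.0.0.0:14268      # 2
        thrift_compact:
          endpoint: 0.0.0.0:6831       # 3
        thrift_binary:
          endpoint: 0.0.0.0:6832       # 4
  service:
    pipelines:
      traces:
        receivers: [jaeger]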
- 1: The Jaeger gRPC endpoint. If omitted, the default 0.0.0.0:14250 is used.
- 2: The Jaeger Thrift HTTP endpoint. If omitted, the default 0.0.0.0:14268 is used.
- 3: The Jaeger Thrift Compact endpoint. If omitted, the default 0.0.0.0:6831 is used.
- 4: The Jaeger Thrift Binary endpoint. If omitted, the default 0.0.0.0:6832 is used.
- 5: The server-side TLS configuration. See the OTLP receiver configuration section for more details.
3.1.1.1.3. Prometheus Receiver
The Prometheus receiver is currently a Technology Preview feature only.
The Prometheus receiver scrapes the metrics endpoints.
OpenTelemetry Collector custom resource with an enabled Prometheus receiver
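A minimal sketch of the stripped example; the target address is an assumption that follows the my-app in the example project scenario described in the callouts:

config: |
  receivers:
    prometheus:
      config:
        scrape_configs:                    # 1
        - job_name: 'my-app'               # 2
          scrape_interval: 5s              # 3
          static_configs:
          - targets: ['my-app.example.svc.cluster.local:8888']   # 4
  service:
    pipelines:
      metrics:
        receivers: [prometheus]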
- 1: Scrape configurations in the Prometheus format.
- 2: The Prometheus job name.
- 3: The interval for scraping the metrics data. Accepts time units. The default value is 1m.
- 4: The targets at which the metrics are exposed. This example scrapes the metrics from a my-app application in the example project.
3.1.1.1.4. Zipkin Receiver
The Zipkin receiver ingests traces in the Zipkin v1 and v2 formats.
OpenTelemetry Collector custom resource with the enabled Zipkin receiver
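A minimal sketch of the stripped example, assuming the upstream default Zipkin port 9411:

config: |
  receivers:
    zipkin:
      endpoint: 0.0.0.0:9411    # the default Zipkin endpoint
  service:
    pipelines:
      traces:
        receivers: [zipkin]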
3.1.1.1.5. Kafka Receiver
The Kafka receiver is currently a Technology Preview feature only.
The Kafka receiver receives traces, metrics, and logs from Kafka in the OTLP format.
OpenTelemetry Collector custom resource with the enabled Kafka receiver
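A minimal sketch of the stripped example, reconstructed from the callout notes; broker addresses and credentials are placeholders:

config: |
  receivers:
    kafka:
      brokers: ["broker1:9092", "broker2:9092"]    # 1
      protocol_version: 2.0.0                      # 2
      topic: otlp_spans                            # 3
      auth:
        plain_text:                                # 4
          username: example
          password: example
        tls:                                       # 5
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
          insecure: false                          # 6
          server_name_override: kafka.example.corp # 7
  service:
    pipelines:
      traces:
        receivers: [kafka]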
- 1: The list of Kafka brokers. The default is localhost:9092.
- 2: The Kafka protocol version. For example, 2.0.0. This is a required field.
- 3: The name of the Kafka topic to read from. The default is otlp_spans.
- 4: The plaintext authentication configuration. If omitted, plaintext authentication is disabled.
- 5: The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
- 6: Disables verifying the server's certificate chain and host name. The default is false.
- 7: ServerName indicates the name of the server requested by the client to support virtual hosting.
3.1.1.1.6. OpenCensus receiver
The OpenCensus receiver provides backwards compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format over gRPC or HTTP and JSON.
OpenTelemetry Collector custom resource with the enabled OpenCensus receiver
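A minimal sketch of the stripped example; the CORS origin is a placeholder:

config: |
  receivers:
    opencensus:
      endpoint: 0.0.0.0:55678          # 1
      tls: {}                          # 2
      cors_allowed_origins:            # 3
      - https://*.<example>.com
  service:
    pipelines:
      traces:
        receivers: [opencensus]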
- 1: The OpenCensus endpoint. If omitted, the default is 0.0.0.0:55678.
- 2: The server-side TLS configuration. See the OTLP receiver configuration section for more details.
- 3: You can also use the HTTP JSON endpoint to optionally configure CORS, which is enabled by specifying a list of allowed CORS origins in this field. Wildcards with * are accepted under the cors_allowed_origins. To match any origin, enter only *.
3.1.1.2. Processors
Processors process the data between the time it is received and the time it is exported.
3.1.1.2.1. Batch processor
The Batch processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information.
Example of the OpenTelemetry Collector custom resource when using the Batch processor
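A minimal sketch of the stripped example; the timeout and size values are illustrative:

config: |
  processors:
    batch:
      timeout: 5s
      send_batch_max_size: 10000
  service:
    pipelines:
      traces:
        processors: [batch]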
Parameter | Description | Default
---|---|---
timeout | Sends the batch after a specific time duration and irrespective of the batch size. |
send_batch_size | Sends the batch of telemetry data after the specified number of spans or metrics. |
send_batch_max_size | The maximum allowable size of the batch. Must be equal to or greater than the send_batch_size. |
metadata_keys | When activated, a batcher instance is created for each unique set of values found in the client.Metadata. |
metadata_cardinality_limit | When the metadata_keys are populated, this configuration restricts the number of distinct metadata key-value combinations that are processed throughout the duration of the process. |
3.1.1.2.2. Memory Limiter processor
The Memory Limiter processor periodically checks the Collector's memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. The preceding component, which is typically a receiver, is expected to retry sending the same data and might apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter processor forces garbage collection to run.
Example of the OpenTelemetry Collector custom resource when using the Memory Limiter processor
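A minimal sketch of the stripped example; the limit values are illustrative:

config: |
  processors:
    memory_limiter:
      check_interval: 1s
      limit_mib: 4000
      spike_limit_mib: 800
  service:
    pipelines:
      traces:
        processors: [memory_limiter]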
Parameter | Description | Default
---|---|---
check_interval | Time between memory usage measurements. The optimal value is 1s. |
limit_mib | The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value. |
spike_limit_mib | Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib. | 20% of limit_mib
limit_percentage | Same as the limit_mib but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting. |
spike_limit_percentage | Same as the spike_limit_mib but expressed as a percentage of the total available memory. Intended to be used with the limit_percentage setting. |
3.1.1.2.3. Resource Detection processor
The Resource Detection processor is currently a Technology Preview feature only.
The Resource Detection processor identifies host resource details in alignment with OpenTelemetry's resource semantic conventions. Using the detected information, it can add or replace the resource values in telemetry data. This processor supports traces and metrics, and you can use it with multiple detectors, such as the Docker metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector.
OpenShift Container Platform permissions required for the Resource Detection processor
OpenTelemetry Collector using the Resource Detection processor
OpenTelemetry Collector using the Resource Detection Processor with an environment variable detector
- 1: Specifies which detector to use. In this example, the environment detector is specified.
3.1.1.2.4. Attributes processor
The Attributes processor is currently a Technology Preview feature only.
The Attributes processor can modify attributes of a span, log, or metric. You can configure this processor to filter and match input data and include or exclude such data for specific actions.
The processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported:
- Insert
- Inserts a new attribute into the input data when the specified key does not already exist.
- Update
- Updates an attribute in the input data if the key already exists.
- Upsert
- Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists.
- Delete
- Removes an attribute from the input data.
- Hash
- Hashes an existing attribute value as SHA1.
- Extract
- Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span processor's to_attributes setting, with the existing attribute as the source.
- Convert
- Converts an existing attribute to a specified type.
OpenTelemetry Collector using the Attributes processor
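A minimal sketch of the stripped example; the keys and values are illustrative and exercise several of the actions listed above:

config: |
  processors:
    attributes:
      actions:
      - key: db.table
        action: delete
      - key: redacted_span
        value: true
        action: upsert
      - key: password
        action: hash
      - key: http.status_code
        action: convert
        converted_type: int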
3.1.1.2.5. Resource processor
The Resource processor is currently a Technology Preview feature only.
The Resource processor applies changes to the resource attributes. This processor supports traces, metrics, and logs.
OpenTelemetry Collector using the Resource processor
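A minimal sketch of the stripped example; the attribute keys and values are illustrative:

config: |
  processors:
    resource:
      attributes:
      - key: cloud.availability_zone
        value: zone-1
        action: upsert
      - key: k8s.cluster.name
        from_attribute: k8s-cluster
        action: insert
      - key: redundant-attribute
        action: delete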
Attributes represent the actions that are applied to the resource attributes, such as deleting, inserting, or upserting an attribute.
3.1.1.2.6. Span processor
The Span processor is currently a Technology Preview feature only.
The Span processor modifies the span name based on its attributes or extracts the span attributes from the span name. It can also change the span status. It can also include or exclude spans. This processor supports traces.
Span renaming requires specifying attributes for the new name by using the from_attributes
configuration.
OpenTelemetry Collector using the Span processor for renaming a span
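A minimal sketch of the stripped example; the keys and separator are placeholders:

config: |
  processors:
    span:
      name:
        from_attributes: [<key1>, <key2>, ...]
        separator: <value>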
You can use the processor to extract attributes from the span name.
OpenTelemetry Collector using the Span processor for extracting attributes from a span name
- 1: This rule defines how the extraction is executed. You can define more rules. In this case, if the regular expression matches the name, a documentId attribute is created. In this example, if the input span name is /api/v1/document/12345678/update, this results in the /api/v1/document/{documentId}/update output span name, and a new "documentId"="12345678" attribute is added to the span.
You can modify the span status.
OpenTelemetry Collector using the Span Processor for status change
3.1.1.2.7. Kubernetes Attributes processor
The Kubernetes Attributes processor is currently a Technology Preview feature only.
The Kubernetes Attributes processor enables automatic configuration of spans, metrics, and log resource attributes by using the Kubernetes metadata. This processor supports traces, metrics, and logs. This processor automatically identifies the Kubernetes resources, extracts the metadata from them, and incorporates this extracted metadata as resource attributes into relevant spans, metrics, and logs. It utilizes the Kubernetes API to discover all pods operating within a cluster, maintaining records of their IP addresses, pod UIDs, and other relevant metadata.
Minimum OpenShift Container Platform permissions required for the Kubernetes Attributes processor
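The RBAC manifest was stripped from this page. A minimal sketch that grants the read access the processor needs; the cluster role name is an assumption:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: ['']
  resources: ['pods', 'namespaces']
  verbs: ['get', 'watch', 'list']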
OpenTelemetry Collector using the Kubernetes Attributes processor
config: |
  processors:
    k8sattributes:
      filter:
        node_from_env_var: KUBE_NODE_NAME
3.1.1.3. Filter processor
The Filter processor is currently a Technology Preview feature only.
The Filter processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. The conditions can be combined by using the logical OR operator. This processor supports traces, metrics, and logs.
OpenTelemetry Collector custom resource with an enabled Filter processor
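A minimal sketch of the stripped example, reconstructed from the callout notes by using OpenTelemetry Transformation Language conditions:

config: |
  processors:
    filter:
      error_mode: ignore                                        # 1
      traces:
        span:
        - 'attributes["container.name"] == "app_container_1"'   # 2
        - 'resource.attributes["host.name"] == "localhost"'     # 3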
- 1: Defines the error mode. When set to ignore, ignores errors returned by conditions. When set to propagate, returns the error up the pipeline. An error causes the payload to be dropped from the Collector.
- 2: Filters the spans that have the container.name == app_container_1 attribute.
- 3: Filters the spans that have the host.name == localhost resource attribute.
3.1.1.4. Routing processor
The Routing processor is currently a Technology Preview feature only.
The Routing processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming HTTP request (gRPC or plain HTTP) or can read a resource attribute, and then directs the trace information to relevant exporters according to the read value.
OpenTelemetry Collector custom resource with an enabled Routing processor
You can optionally create an attribute_source configuration, which defines where to look for the attribute set in from_attribute. The allowed values are context, to search the context including the HTTP headers, and resource, to search the resource attributes.
3.1.1.5. Exporters
Exporters send data to one or more back ends or destinations.
3.1.1.5.1. OTLP exporter
The OTLP gRPC exporter exports traces and metrics using the OpenTelemetry protocol (OTLP).
OpenTelemetry Collector custom resource with an enabled OTLP exporter
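A minimal sketch of the stripped example, reconstructed from the callout notes; the endpoint, certificate paths, and header are placeholders:

config: |
  exporters:
    otlp:
      endpoint: tempo-ingester:4317        # 1
      tls:                                 # 2
        ca_file: ca.pem
        cert_file: cert.pem
        key_file: key.pem
        insecure: false                    # 3
        insecure_skip_verify: false        # 4
        reload_interval: 1h                # 5
        server_name_override: <name>       # 6
      headers:                             # 7
        X-Scope-OrgID: "dev"
  service:
    pipelines:
      traces:
        exporters: [otlp]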
- 1: The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls.
- 2: The client-side TLS configuration. Defines paths to TLS certificates.
- 3: Disables client transport security when set to true. The default value is false.
- 4: Skips verifying the certificate when set to true. The default value is false.
- 5: Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval accepts a string containing valid units of time such as ns, us (or µs), ms, s, m, and h.
- 6: Overrides the virtual host name of authority such as the authority header field in requests. You can use this for testing.
- 7: Headers are sent for every request performed during an established connection.
3.1.1.5.2. OTLP HTTP exporter
The OTLP HTTP exporter exports traces and metrics using the OpenTelemetry protocol (OTLP).
OpenTelemetry Collector custom resource with an enabled OTLP HTTP exporter
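A minimal sketch of the stripped example; the endpoint and header are placeholders:

config: |
  exporters:
    otlphttp:
      endpoint: http://tempo-ingester:4318   # 1
      tls: {}                                # 2
      headers:                               # 3
        X-Scope-OrgID: "dev"
      disable_keep_alives: false             # 4
  service:
    pipelines:
      traces:
        exporters: [otlphttp]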
- 1: The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls.
- 2: The client-side TLS configuration. Defines paths to TLS certificates.
- 3: Headers are sent in every HTTP request.
- 4: If true, disables HTTP keep-alives. Only one connection to the server is used per HTTP request.
3.1.1.5.3. Debug exporter
The Debug exporter prints traces and metrics to the standard output.
OpenTelemetry Collector custom resource with an enabled Debug exporter
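A minimal sketch of the stripped example:

config: |
  exporters:
    debug:
      verbosity: detailed    # 1
  service:
    pipelines:
      traces:
        exporters: [debug]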
- 1: Verbosity of the debug export: detailed, normal, or basic. When set to detailed, pipeline data is verbosely logged. Defaults to normal.
3.1.1.5.4. Prometheus exporter
The Prometheus exporter is currently a Technology Preview feature only.
The Prometheus exporter exports metrics in the Prometheus or OpenMetrics formats.
OpenTelemetry Collector custom resource with an enabled Prometheus exporter
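A minimal sketch of the stripped example as a fragment of the OpenTelemetryCollector spec; the port name, endpoint, prefix, and label are placeholders:

ports:
- name: promexporter                     # 1
  port: 8889
  protocol: TCP
config: |
  exporters:
    prometheus:
      endpoint: 0.0.0.0:8889             # 2
      tls:                               # 3
        ca_file: ca.pem
        cert_file: cert.pem
        key_file: key.pem
      namespace: prefix                  # 4
      const_labels:                      # 5
        label1: value1
      enable_open_metrics: true          # 6
      resource_to_telemetry_conversion:  # 7
        enabled: true
      metric_expiration: 180m            # 8
      add_metric_suffixes: false         # 9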
- 1: Exposes the Prometheus port from the Collector pod and service. You can enable scraping of metrics by Prometheus by using the port name in the ServiceMonitor or PodMonitor custom resource.
- 2: The network endpoint where the metrics are exposed.
- 3: The server-side TLS configuration. Defines paths to TLS certificates.
- 4: If set, exports metrics under the provided value. No default.
- 5: Key-value pair labels that are applied for every exported metric. No default.
- 6: If true, metrics are exported by using the OpenMetrics format. Exemplars are exported only in the OpenMetrics format and only for histogram and monotonic sum metrics such as counter. Disabled by default.
- 7: If enabled is true, all the resource attributes are converted to metric labels by default. Disabled by default.
- 8: Defines how long metrics are exposed without updates. The default is 5m.
- 9: Adds the metrics types and units suffixes. Must be disabled if the monitor tab in the Jaeger console is enabled. The default is true.
3.1.1.5.5. Kafka exporter
The Kafka exporter is currently a Technology Preview feature only.
The Kafka exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. It must be used with batch and queued retry processors for higher throughput and resiliency.
OpenTelemetry Collector custom resource with an enabled Kafka exporter
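A minimal sketch of the stripped example; broker addresses and credentials are placeholders:

config: |
  exporters:
    kafka:
      brokers: ["broker1:9092", "broker2:9092"]    # 1
      protocol_version: 2.0.0                      # 2
      topic: otlp_spans                            # 3
      auth:
        plain_text:                                # 4
          username: example
          password: example
        tls:                                       # 5
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
          insecure: false                          # 6
          server_name_override: kafka.example.corp # 7
  service:
    pipelines:
      traces:
        exporters: [kafka]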
- 1: The list of Kafka brokers. The default is localhost:9092.
- 2: The Kafka protocol version. For example, 2.0.0. This is a required field.
- 3: The name of the Kafka topic to export to. The following are the defaults: otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs.
- 4: The plaintext authentication configuration. If omitted, plaintext authentication is disabled.
- 5: The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
- 6: Disables verifying the server's certificate chain and host name. The default is false.
- 7: ServerName indicates the name of the server requested by the client to support virtual hosting.
3.1.1.6. Connectors
Connectors connect two pipelines.
3.1.1.6.1. Spanmetrics connector
The Spanmetrics connector is currently a Technology Preview feature only.
The Spanmetrics connector aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data.
OpenTelemetry Collector custom resource with an enabled spanmetrics connector
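A minimal sketch of the stripped example; the connector is listed as an exporter of the traces pipeline and as a receiver of the metrics pipeline:

config: |
  connectors:
    spanmetrics:
      metrics_flush_interval: 15s    # 1
  service:
    pipelines:
      traces:
        exporters: [spanmetrics]
      metrics:
        receivers: [spanmetrics]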
- 1: Defines the flush interval of the generated metrics. Defaults to 15s.
3.1.1.7. Extensions
Extensions add capabilities to the Collector.
3.1.1.7.1. BearerTokenAuth extension
The BearerTokenAuth extension is currently a Technology Preview feature only.
The BearerTokenAuth extension is an authenticator for receivers and exporters that are based on the HTTP and the gRPC protocol. You can use the OpenTelemetry Collector custom resource to configure client authentication and server authentication for the BearerTokenAuth extension on the receiver and exporter side. This extension supports traces, metrics, and logs.
OpenTelemetry Collector custom resource with client and server authentication configured for the BearerTokenAuth extension
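A minimal sketch of the stripped example, reconstructed from the callout notes; the token values are placeholders:

config: |
  extensions:
    bearertokenauth:
      scheme: "Bearer"             # 1
      token: "<token>"             # 2
      filename: "<token_file>"     # 3
  receivers:
    otlp:
      protocols:
        http:
          auth:
            authenticator: bearertokenauth    # 4
  exporters:
    otlp:
      auth:
        authenticator: bearertokenauth        # 5
  service:
    extensions: [bearertokenauth]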
- 1: You can configure the BearerTokenAuth extension to send a custom scheme. The default is Bearer.
- 2: You can add the BearerTokenAuth extension token as metadata to identify a message.
- 3: Path to a file that contains an authorization token that is transmitted with every message.
- 4: You can assign the authenticator configuration to an OTLP receiver.
- 5: You can assign the authenticator configuration to an OTLP exporter.
3.1.1.7.2. OAuth2Client extension
The OAuth2Client extension is currently a Technology Preview feature only.
The OAuth2Client extension is an authenticator for exporters that are based on the HTTP and the gRPC protocol. Client authentication for the OAuth2Client extension is configured in a separate section in the OpenTelemetry Collector custom resource. This extension supports traces, metrics, and logs.
OpenTelemetry Collector custom resource with client authentication configured for the OAuth2Client extension
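A minimal sketch of the stripped example, reconstructed from the callout notes; all identifiers, URLs, and file paths are placeholders:

config: |
  extensions:
    oauth2client:
      client_id: <client_id>                # 1
      client_secret: <client_secret>        # 2
      endpoint_params:                      # 3
        audience: <audience>
      token_url: https://example.com/oauth2/default/v1/token   # 4
      scopes: ["api.metrics"]               # 5
      tls:                                  # 6
        insecure: true                      # 7
        ca_file: /var/lib/mycert.pem        # 8
        cert_file: <cert_file>              # 9
        key_file: <key_file>                # 10
      timeout: 2s                           # 11
  exporters:
    otlp:
      auth:
        authenticator: oauth2client         # 12
  service:
    extensions: [oauth2client]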
- 1: Client identifier, which is provided by the identity provider.
- 2: Confidential key used to authenticate the client to the identity provider.
- 3: Further metadata, in the key-value pair format, that is transferred during authentication. For example, audience specifies the intended audience for the access token, indicating the recipient of the token.
- 4: The URL of the OAuth2 token endpoint, where the Collector requests access tokens.
- 5: The scopes define the specific permissions or access levels requested by the client.
- 6: The Transport Layer Security (TLS) settings for the token client, which is used to establish a secure connection when requesting tokens.
- 7: When set to true, configures the Collector to use an insecure or non-verified TLS connection to call the configured token endpoint.
- 8: The path to a Certificate Authority (CA) file that is used to verify the server's certificate during the TLS handshake.
- 9: The path to the client certificate file that the client must use to authenticate itself to the OAuth2 server if required.
- 10: The path to the client's private key file that is used with the client certificate if needed for authentication.
- 11: Sets a timeout for the token client's request.
- 12: You can assign the authenticator configuration to an OTLP exporter.
3.1.1.7.3. Jaeger Remote Sampling extension
The Jaeger Remote Sampling extension is currently a Technology Preview feature only.
The Jaeger Remote Sampling extension enables serving sampling strategies that follow the Jaeger remote sampling API. You can configure this extension to proxy requests to a backing remote sampling server, such as a Jaeger collector down the pipeline, or to a static JSON file from the local file system.
OpenTelemetry Collector custom resource with a configured Jaeger Remote Sampling extension
Example of a Jaeger Remote Sampling strategy file
3.1.1.7.4. Performance Profiler extension
The Performance Profiler extension is currently a Technology Preview feature only.
The Performance Profiler extension enables the Go net/http/pprof
endpoint. This is typically used by developers to collect performance profiles and investigate issues with the service.
OpenTelemetry Collector custom resource with the configured Performance Profiler extension
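A minimal sketch of the stripped example, reconstructed from the callout notes; the file name is a placeholder:

config: |
  extensions:
    pprof:
      endpoint: localhost:1777        # 1
      block_profile_fraction: 0       # 2
      mutex_profile_fraction: 0       # 3
      save_to_file: my.pprof          # 4
  service:
    extensions: [pprof]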
- 1: The endpoint at which this extension listens. Use localhost: to make it available only locally or ":" to make it available on all network interfaces. The default value is localhost:1777.
- 2: Sets a fraction of blocking events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0.
- 3: Sets a fraction of mutex contention events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0.
- 4: The name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts. Profiling is saved to the file when the Collector is terminated.
3.1.1.7.5. Health Check extension
The Health Check extension is currently a Technology Preview feature only.
The Health Check extension provides an HTTP URL for checking the status of the OpenTelemetry Collector. You can use this extension as a liveness and readiness probe on OpenShift.
OpenTelemetry Collector custom resource with the configured Health Check extension
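A minimal sketch of the stripped example, reconstructed from the callout notes; certificate paths and the health path are placeholders:

config: |
  extensions:
    health_check:
      endpoint: "0.0.0.0:13133"         # 1
      tls:                              # 2
        ca_file: ca.pem
        cert_file: cert.pem
        key_file: key.pem
      path: "/health/status"            # 3
      check_collector_pipeline:         # 4
        enabled: true                   # 5
        interval: "5m"                  # 6
        exporter_failure_threshold: 5   # 7
  service:
    extensions: [health_check]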
- 1: The target IP address for publishing the health check status. The default is 0.0.0.0:13133.
- 2: The TLS server-side configuration. Defines paths to TLS certificates. If omitted, the TLS is disabled.
- 3: The path for the health check server. The default is /.
- 4: Settings for the Collector pipeline health check.
- 5: Enables the Collector pipeline health check. The default is false.
- 6: The time interval for checking the number of failures. The default is 5m.
- 7: The threshold of the number of failures until which a container is still marked as healthy. The default is 5.
3.1.1.7.6. Memory Ballast extension
The Memory Ballast extension is currently a Technology Preview feature only.
The Memory Ballast extension enables applications to configure memory ballast for the process.
OpenTelemetry Collector custom resource with the configured Memory Ballast extension
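A minimal sketch of the stripped example; the ballast size is illustrative:

config: |
  extensions:
    memory_ballast:
      size_mib: 64    # ballast size in MiB; a percentage-based setting is also available upstream
  service:
    extensions: [memory_ballast]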
3.1.1.7.7. zPages extension
The zPages extension is currently a Technology Preview feature only.
The zPages extension provides an HTTP endpoint for extensions that serve zPages. At the endpoint, this extension serves live data for debugging instrumented components. All core exporters and receivers provide some zPages instrumentation.
zPages are useful for in-process diagnostics without having to depend on a back end to examine traces or metrics.
OpenTelemetry Collector custom resource with the configured zPages extension
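A minimal sketch of the stripped example:

config: |
  extensions:
    zpages:
      endpoint: localhost:55679    # 1
  service:
    extensions: [zpages]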
- 1: Specifies the HTTP endpoint that serves zPages. Use localhost: to make it available only locally, or ":" to make it available on all network interfaces. The default is localhost:55679.
3.2. Gathering the observability data from different clusters with the OpenTelemetry Collector
For a multicluster configuration, you can create one OpenTelemetry Collector instance in each remote cluster and then forward all the telemetry data to one OpenTelemetry Collector instance.
Prerequisites
- The Red Hat build of OpenTelemetry Operator is installed.
- The Tempo Operator is installed.
- A TempoStack instance is deployed on the cluster.
- The following certificates are mounted: an issuer, a self-signed certificate, a CA issuer, and the client and server certificates. To create any of these certificates, see step 1.
Procedure
Mount the following certificates in the OpenTelemetry Collector instance, skipping any already mounted certificates:
- An issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift.
- A self-signed certificate.
- A CA issuer.
- The client and server certificates.
Create a service account for the OpenTelemetry Collector instance.
Example ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
Create a cluster role for the service account.
Example ClusterRole
Bind the cluster role to the service account.
Example ClusterRoleBinding
Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters.

Example OpenTelemetryCollector custom resource for the edge clusters

- 1: The Collector exporter is configured to export OTLP HTTP and points to the OpenTelemetry Collector from the central cluster.
Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster.

Example OpenTelemetryCollector custom resource for the central cluster
3.3. Configuration for sending metrics to the monitoring stack
The OpenTelemetry Collector custom resource (CR) can be configured to create a Prometheus ServiceMonitor
CR for scraping the Collector’s pipeline metrics and the enabled Prometheus exporters.
Example of the OpenTelemetry Collector custom resource with the Prometheus exporter
- 1: Configures the Operator to create the Prometheus ServiceMonitor CR to scrape the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints. The metrics are stored in the OpenShift monitoring stack.
Alternatively, a manually created Prometheus PodMonitor
can provide fine control, for example removing duplicated labels added during Prometheus scraping.
Example of the PodMonitor
custom resource that configures the monitoring stack to scrape the Collector metrics
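A minimal sketch of such a PodMonitor; the label selector and port names are assumptions that must match your Collector deployment:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: otel-collector
  podMetricsEndpoints:
  - port: metrics         # internal Collector metrics port
  - port: promexporter    # Prometheus exporter metrics port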
3.4. Setting up monitoring for the Red Hat build of OpenTelemetry
The Red Hat build of OpenTelemetry Operator supports monitoring and alerting of each OpenTelemetry Collector instance and exposes upgrade and operational metrics about the Operator itself.
3.4.1. Configuring the OpenTelemetry Collector metrics
You can enable metrics and alerts of OpenTelemetry Collector instances.
Prerequisites
- Monitoring for user-defined projects is enabled in the cluster.
Procedure
To enable metrics of an OpenTelemetry Collector instance, set the spec.observability.metrics.enableMetrics field to true:
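A minimal sketch; the instance name is a placeholder:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: <name>
spec:
  observability:
    metrics:
      enableMetrics: true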
Verification
You can use the Administrator view of the web console to verify successful configuration:
- Go to Observe → Targets, filter by Source: User, and check that the ServiceMonitors in the opentelemetry-collector-<instance_name> format have the Up status.
Chapter 4. Configuring and deploying the OpenTelemetry instrumentation injection
OpenTelemetry instrumentation injection is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the configuration of the instrumentation.
4.1. OpenTelemetry instrumentation configuration options
The Red Hat build of OpenTelemetry can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server (httpd
).
Auto-instrumentation in OpenTelemetry refers to the capability where the framework automatically instruments an application without manual code changes. This enables developers and administrators to get observability into their applications with minimal effort and changes to the existing codebase.
The Red Hat build of OpenTelemetry Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images.
4.1.1. Instrumentation options
Instrumentation options are specified in the Instrumentation custom resource.

Sample Instrumentation custom resource file
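The sample was stripped from this page. A minimal sketch of an Instrumentation CR; the name, endpoint, and values are placeholders:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation
spec:
  env:
  - name: OTEL_EXPORTER_OTLP_TIMEOUT
    value: "20"
  exporter:
    endpoint: http://production-collector.observability.svc.cluster.local:4317
  propagators:
  - b3multi
  sampler:
    type: traceidratio
    argument: "0.25"
  java:
    env:
    - name: OTEL_JAVAAGENT_DEBUG
      value: "true"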
Parameter | Description | Values
---|---|---
env | Common environment variables to define across all the instrumentations. |
exporter | Exporter configuration. |
propagators | Propagators defines inter-process context propagation configuration. |
resource | Resource attributes configuration. |
sampler | Sampling configuration. |
apacheHttpd | Configuration for the Apache HTTP Server instrumentation. |
dotnet | Configuration for the .NET instrumentation. |
go | Configuration for the Go instrumentation. |
java | Configuration for the Java instrumentation. |
nodejs | Configuration for the Node.js instrumentation. |
python | Configuration for the Python instrumentation. |
4.1.2. Using the instrumentation CR with Service Mesh
When using the instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi
propagator.
4.1.2.1. Configuration of the Apache HTTP Server auto-instrumentation
Name | Description | Default
---|---|---
attrs | Attributes specific to the Apache HTTP Server. |
configPath | Location of the Apache HTTP Server configuration. | /usr/local/apache2/conf
env | Environment variables specific to the Apache HTTP Server. |
image | Container image with the Apache SDK and auto-instrumentation. |
resourceRequirements | The compute resource requirements. |
version | Apache HTTP Server version. | 2.4
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-apache-httpd: "true"
4.1.2.2. Configuration of the .NET auto-instrumentation
Name | Description
---|---
env | Environment variables specific to .NET.
image | Container image with the .NET SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to port 4317. The .NET auto-instrumentation uses http/proto by default, so the telemetry data must be sent to port 4318.
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-dotnet: "true"
4.1.2.3. Configuration of the Go auto-instrumentation
Name | Description
---|---
env | Environment variables specific to Go.
image | Container image with the Go SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
The PodSpec annotations to enable injection:

instrumentation.opentelemetry.io/inject-go: "true"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable"

The instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable.
Additional permissions required for the Go auto-instrumentation in the OpenShift cluster
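The SecurityContextConstraints manifest was stripped from this page. A minimal sketch, assuming the otel-go-instrumentation-scc name referenced by the command below:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: otel-go-instrumentation-scc
allowHostDirVolumePlugin: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- "SYS_PTRACE"
fsGroup:
  type: RunAsAny
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
  type: RunAsAny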
The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows:
$ oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>
4.1.2.4. Configuration of the Java auto-instrumentation
Name | Description
---|---
env | Environment variables specific to Java.
image | Container image with the Java SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-java: "true"
4.1.2.5. Configuration of the Node.js auto-instrumentation
Name | Description
---|---
env | Environment variables specific to Node.js.
image | Container image with the Node.js SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-nodejs: "true"
4.1.2.6. Configuration of the Python auto-instrumentation
Name | Description
---|---
env | Environment variables specific to Python.
image | Container image with the Python SDK and auto-instrumentation.
resourceRequirements | The compute resource requirements.
For Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to port 4317. Python auto-instrumentation uses http/proto by default, so the telemetry data must be sent to port 4318.
The PodSpec annotation to enable injection:

instrumentation.opentelemetry.io/inject-python: "true"
4.1.2.7. Configuration of the OpenTelemetry SDK variables
The OpenTelemetry SDK variables in your pod are configurable by using the following annotation:
instrumentation.opentelemetry.io/inject-sdk: "true"
Note that all the annotations accept the following values:
- true: Injects the Instrumentation resource from the namespace.
- false: Does not inject any instrumentation.
- instrumentation-name: The name of the instrumentation resource to inject from the current namespace.
- other-namespace/instrumentation-name: The name of the instrumentation resource to inject from another namespace.
4.1.2.8. Multi-container pods
By default, the instrumentation is injected into the first container that is available according to the pod specification. In some cases, you can also specify the target containers for injection.
Pod annotation
instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>"
The Go auto-instrumentation does not support multi-container auto-instrumentation injection.
Chapter 5. Using the Red Hat build of OpenTelemetry
You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack.
5.1. Forwarding traces to a TempoStack by using the OpenTelemetry Collector
To configure forwarding traces to a TempoStack, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources.
Prerequisites
- The Red Hat build of OpenTelemetry Operator is installed.
- The Tempo Operator is installed.
- A TempoStack is deployed on the cluster.
Procedure
Create a service account for the OpenTelemetry Collector.
Example ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment

Create a cluster role for the service account.
Example ClusterRole
Bind the cluster role to the service account.
Example ClusterRoleBinding
Create the YAML file to define the OpenTelemetryCollector custom resource (CR).

Example OpenTelemetryCollector

- 1: The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created.
- 2: The Collector is configured with a receiver for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol.
You can deploy tracegen as a test.
5.2. Sending traces and metrics to the OpenTelemetry Collector
Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection.
5.2.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection
You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection.
The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector.
Prerequisites
- The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed.
- You have access to the cluster through the web console or the OpenShift CLI (oc):
  - You are logged in to the web console as a cluster administrator with the cluster-admin role.
  - An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
  - For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
Procedure
Create a project for an OpenTelemetry Collector instance.
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability
Create a service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-sidecar
  namespace: observability
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Grant the permissions to the service account for the
k8sattributes
andresourcedetection
processors.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Deploy the OpenTelemetry Collector as a sidecar.
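A sketch of the typical grants; the role and binding names are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-sidecar
rules:
  # Read pod and namespace metadata for the k8sattributes processor.
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "watch", "list"]
  # Read the cluster infrastructure details for the resourcedetection processor.
  - apiGroups: ["config.openshift.io"]
    resources: ["infrastructures", "infrastructures/status"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-sidecar
subjects:
  - kind: ServiceAccount
    name: otel-collector-sidecar
    namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-collector-sidecar
  apiGroup: rbac.authorization.k8s.io

Deploy the OpenTelemetry Collector as a sidecar.

A minimal sidecar sketch, assuming a TempoStack Gateway reachable at tempo-<example>-gateway:8090; the numbered comment corresponds to the callout that follows:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: sidecar
  serviceAccount: otel-collector-sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [env, openshift]
        timeout: 2s
    exporters:
      otlp:
        endpoint: "tempo-<example>-gateway:8090" # 1
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, resourcedetection, batch]
          exporters: [otlp]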
1 - This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator.
- Create your deployment using the otel-collector-sidecar service account.
- Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This injects all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance, as in the sketch below.
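A hypothetical Deployment fragment combining both steps; the workload name and image are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.opentelemetry.io/inject: "true"
    spec:
      serviceAccountName: otel-collector-sidecar
      containers:
        - name: my-app
          image: <your_application_image>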
5.2.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection
You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables.
Prerequisites
- The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed.
- You have access to the cluster through the web console or the OpenShift CLI (oc):
  - You are logged in to the web console as a cluster administrator with the cluster-admin role.
  - An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
  - For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
Procedure
Create a project for an OpenTelemetry Collector instance.
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability
Create a service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: observability
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Grant the permissions to the service account for the
k8sattributes
andresourcedetection
processors.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Deploy the OpenTelemetry Collector instance with the
OpenTelemetryCollector
custom resource.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
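A sketch of the typical grants; the role and binding names are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  # Read pod and namespace metadata for the k8sattributes processor.
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "watch", "list"]
  # Read the cluster infrastructure details for the resourcedetection processor.
  - apiGroups: ["config.openshift.io"]
    resources: ["infrastructures", "infrastructures/status"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
  - kind: ServiceAccount
    name: otel-collector-deployment
    namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io

Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource.

A minimal deployment-mode sketch, assuming a TempoStack Gateway reachable at tempo-<example>-gateway:8090; the numbered comment corresponds to the callout that follows:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
      k8sattributes:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [env, openshift]
    exporters:
      otlp:
        endpoint: "tempo-<example>-gateway:8090" # 1
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlp]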
1 - This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator.
Set the environment variables in the container with your instrumented application.
Name, description, and default value of each variable:

OTEL_SERVICE_NAME
- Sets the value of the service.name resource attribute. Default: "".
OTEL_EXPORTER_OTLP_ENDPOINT
- Base endpoint URL for any signal type, with an optionally specified port number. Default: https://localhost:4317.
OTEL_EXPORTER_OTLP_CERTIFICATE
- Path to the certificate file for the TLS credentials of the gRPC client. No default value.
OTEL_TRACES_SAMPLER
- Sampler to be used for traces. Default: parentbased_always_on.
OTEL_EXPORTER_OTLP_PROTOCOL
- Transport protocol for the OTLP exporter. Default: grpc.
OTEL_EXPORTER_OTLP_TIMEOUT
- Maximum time interval for the OTLP exporter to wait for each batch export. Default: 10s.
OTEL_EXPORTER_OTLP_INSECURE
- Disables client transport security for gRPC requests. An HTTPS schema overrides it. Default: False.
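For example, a hypothetical Deployment fragment that sets a service name and points the exporter at a Collector service in the observability project; the workload name, image, service host, and values are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: <your_application_image>
          env:
            - name: OTEL_SERVICE_NAME
              value: "my-app"
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://otel-collector.observability.svc.cluster.local:4317"
            - name: OTEL_EXPORTER_OTLP_INSECURE
              value: "true"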
Chapter 6. Troubleshooting the Red Hat build of OpenTelemetry
The OpenTelemetry Collector offers multiple ways to measure its health and to investigate data ingestion issues.
6.1. Getting the OpenTelemetry Collector logs
You can get the logs for the OpenTelemetry Collector as follows.
Procedure
Set the relevant log level in the OpenTelemetryCollector custom resource (CR):

config: |
  service:
    telemetry:
      logs:
        level: debug # 1

1 - Collector’s log level. Supported values include info, warn, error, or debug. Defaults to info.
- Use the oc logs command or the web console to retrieve the logs, for example:
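Both names below are placeholders for your Collector pod and the project it runs in:

$ oc logs <collector_pod> -n <namespace>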
6.2. Exposing the metrics
The OpenTelemetry Collector exposes metrics about the data volumes it has processed. The following metrics are for spans, although similar metrics are exposed for the metrics and logs signals:
otelcol_receiver_accepted_spans
- The number of spans successfully pushed into the pipeline.
otelcol_receiver_refused_spans
- The number of spans that could not be pushed into the pipeline.
otelcol_exporter_sent_spans
- The number of spans successfully sent to the destination.
otelcol_exporter_enqueue_failed_spans
- The number of spans that failed to be added to the sending queue.
The Operator creates a <cr_name>-collector-monitoring telemetry service that you can use to scrape the metrics endpoint.
Procedure
Enable the telemetry service by adding the following lines in the OpenTelemetryCollector custom resource:

config: |
  service:
    telemetry:
      metrics:
        address: ":8888" # 1

1 - The address at which the internal collector metrics are exposed. Defaults to :8888.
Retrieve the metrics by running the following command, which port-forwards to the Collector pod:

$ oc port-forward <collector_pod> 8888:8888
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Access the metrics endpoint at
http://localhost:8888/metrics
.
6.3. Debug exporter
You can configure the debug exporter to export the collected data to the standard output.
Procedure
Configure the OpenTelemetryCollector custom resource as follows:
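A minimal sketch that adds the debug exporter and wires it into a traces pipeline; it assumes an OTLP receiver is already defined as in the earlier examples:

config: |
  receivers:
    otlp:
      protocols:
        grpc:
  exporters:
    debug:
  service:
    pipelines:
      traces:
        receivers: [otlp]
        exporters: [debug]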
- Use the oc logs command or the web console to view the data that the debug exporter writes to the standard output.
Chapter 7. Migrating from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry
If you are already using the Red Hat OpenShift distributed tracing platform (Jaeger) for your applications, you can migrate to the Red Hat build of OpenTelemetry, which is based on the OpenTelemetry open-source project.
The Red Hat build of OpenTelemetry provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the Red Hat build of OpenTelemetry can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications.
Migration from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments.
7.1. Migrating from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry with sidecars
The Red Hat build of OpenTelemetry Operator supports sidecar injection into deployment workloads, so you can migrate from a distributed tracing platform (Jaeger) sidecar to a Red Hat build of OpenTelemetry sidecar.
Prerequisites
- The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster.
- The Red Hat build of OpenTelemetry is installed.
Procedure
Configure the OpenTelemetry Collector as a sidecar.
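A minimal sidecar sketch that keeps the Jaeger receiver enabled so existing Jaeger SDKs continue to work; the namespace and Gateway endpoint are placeholders, and the numbered comment corresponds to the callout that follows:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <application_namespace>
spec:
  mode: sidecar
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [env, openshift]
        timeout: 2s
    exporters:
      otlp:
        endpoint: "tempo-<example>-gateway:8090" # 1
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          processors: [memory_limiter, resourcedetection, batch]
          exporters: [otlp]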
1 - This endpoint points to the Gateway of a TempoStack instance deployed by using the <example> Tempo Operator.
Create a service account for running your application.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-sidecar
Create a cluster role for the permissions needed by some processors.
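A minimal sketch; the role name is illustrative, and the numbered comment corresponds to the callout that follows:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-sidecar
rules:
  - apiGroups: ["config.openshift.io"]
    resources: ["infrastructures", "infrastructures/status"] # 1
    verbs: ["get", "watch", "list"]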
1 - The resourcedetection processor requires permissions for infrastructures and infrastructures/status.
Create a ClusterRoleBinding to set the permissions for the service account.
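A minimal binding sketch; the namespace placeholder is the project where your application and its service account run:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-sidecar
subjects:
  - kind: ServiceAccount
    name: otel-collector-sidecar
    namespace: <application_namespace>
roleRef:
  kind: ClusterRole
  name: otel-collector-sidecar
  apiGroup: rbac.authorization.k8s.io

- Deploy the OpenTelemetry Collector as a sidecar.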
- Remove the injected Jaeger Agent from your application by removing the "sidecar.jaegertracing.io/inject": "true" annotation from your Deployment object.
- Enable automatic injection of the OpenTelemetry sidecar by adding the sidecar.opentelemetry.io/inject: "true" annotation to the .spec.template.metadata.annotations field of your Deployment object.
- Use the created service account for the deployment of your application so that the processors can get the correct information and add it to your traces, as in the sketch below.
7.2. Migrating from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecars
You can migrate from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecar deployment.
Prerequisites
- The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster.
- The Red Hat build of OpenTelemetry is installed.
Procedure
- Configure the OpenTelemetry Collector deployment:
Create the project where the OpenTelemetry Collector will be deployed.
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a service account for running the OpenTelemetry Collector instance.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: observability
Create a cluster role for setting the required permissions for the processors.
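A minimal sketch covering the reads typically needed by the k8sattributes and resourcedetection processors; the role name is illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["config.openshift.io"]
    resources: ["infrastructures", "infrastructures/status"]
    verbs: ["get", "watch", "list"]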
Create a ClusterRoleBinding to set the permissions for the service account.
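A minimal binding sketch for the service account created above; the binding name is illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
  - kind: ServiceAccount
    name: otel-collector-deployment
    namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io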
Create the OpenTelemetry Collector instance.
Note: This collector exports traces to a TempoStack instance. You must create your TempoStack instance by using the Red Hat Tempo Operator and set the correct endpoint here.
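A minimal deployment-mode sketch that keeps the Jaeger receiver enabled so existing Jaeger SDKs continue to work; the Tempo endpoint is a placeholder for your TempoStack instance:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
      k8sattributes:
      resourcedetection:
        detectors: [env, openshift]
    exporters:
      otlp:
        endpoint: "tempo-<example>-distributor:4317"
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [jaeger, otlp]
          processors: [k8sattributes, resourcedetection, batch]
          exporters: [otlp]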
- Point your tracing endpoint to the OpenTelemetry Collector.
If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint.
Example of exporting traces by using the jaegerexporter with Golang

exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) // 1

1 - The URL points to the OpenTelemetry Collector API endpoint.
Chapter 8. Updating the Red Hat build of OpenTelemetry
For version upgrades, the Red Hat build of OpenTelemetry Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster.
The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators.
When the Red Hat build of OpenTelemetry Operator is upgraded to the new version, it scans for running OpenTelemetry Collector instances that it manages and upgrades them to the version corresponding to the Operator’s new version.
Chapter 9. Removing the Red Hat build of OpenTelemetry
The steps for removing the Red Hat build of OpenTelemetry from an OpenShift Container Platform cluster are as follows:
- Shut down all Red Hat build of OpenTelemetry pods.
- Remove any OpenTelemetryCollector instances.
- Remove the Red Hat build of OpenTelemetry Operator.
9.1. Removing an OpenTelemetry Collector instance by using the web console
You can remove an OpenTelemetry Collector instance in the Administrator view of the web console.
Prerequisites
- You are logged in to the web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure
- Go to Operators → Installed Operators → Red Hat build of OpenTelemetry Operator → OpenTelemetryInstrumentation or OpenTelemetryCollector.
- To remove the relevant instance, select the Options menu → Delete … → Delete.
- Optional: Remove the Red Hat build of OpenTelemetry Operator.
9.2. Removing an OpenTelemetry Collector instance by using the CLI
You can remove an OpenTelemetry Collector instance on the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

Tip
- Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
- Run oc login:

$ oc login --username=<your_username>
Procedure
Get the name of the OpenTelemetry Collector instance by running the following command:
$ oc get deployments -n <project_of_opentelemetry_instance>
Remove the OpenTelemetry Collector instance by running the following command:
$ oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>
- Optional: Remove the Red Hat build of OpenTelemetry Operator.
Verification
To verify successful removal of the OpenTelemetry Collector instance, run oc get deployments again:

$ oc get deployments -n <project_of_opentelemetry_instance>
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.