Chapter 1. Configuring the Collector
The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat build of OpenTelemetry resources. You can install the default configuration or modify the file.
1.1. Deployment modes
The OpenTelemetryCollector custom resource allows you to specify one of the following deployment modes for the OpenTelemetry Collector:
- Deployment
  - The default mode.
- StatefulSet
  - If you need to run stateful workloads, for example when using the Collector's File Storage Extension or Tail Sampling Processor, use the StatefulSet deployment mode.
- DaemonSet
  - If you need to scrape telemetry data from every node, for example by using the Collector's Filelog Receiver to read container logs, use the DaemonSet deployment mode.
- Sidecar
  - If you need access to log files inside a container, inject the Collector as a sidecar, and use the Collector's Filelog Receiver and a shared volume such as `emptyDir`. If you need to configure an application to send telemetry data via `localhost`, inject the Collector as a sidecar, and set up the Collector to forward the telemetry data to an external service via an encrypted and authenticated connection. When injected as a sidecar, the Collector runs in the same pod as the application.

Note: If you choose the sidecar deployment mode, then in addition to setting the `spec.mode: sidecar` field in the `OpenTelemetryCollector` custom resource (CR), you must also set the `sidecar.opentelemetry.io/inject` annotation as a pod annotation or namespace annotation. If you set this annotation on both the pod and the namespace, the pod annotation takes precedence if it is set to either `false` or the name of an `OpenTelemetryCollector` CR.

As a pod annotation, the `sidecar.opentelemetry.io/inject` annotation supports the following values:

- `false`: Does not inject the Collector. This is the default if the annotation is missing.
- `true`: Injects the Collector with the configuration of the `OpenTelemetryCollector` CR in the same namespace.
- `<collector_name>`: Injects the Collector with the configuration of the `<collector_name>` `OpenTelemetryCollector` CR in the same namespace.
- `<namespace>/<collector_name>`: Injects the Collector with the configuration of the `<collector_name>` `OpenTelemetryCollector` CR in the `<namespace>` namespace.
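As a minimal sketch of the sidecar mode, the following pairs a sidecar-mode CR with a pod that opts in to injection by name. The CR name `my-sidecar`, the pod name, and the container image are hypothetical, and the Debug Exporter stands in for a real destination:

```yaml
# Hypothetical sidecar-mode OpenTelemetryCollector CR.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: my-sidecar          # hypothetical name
spec:
  mode: sidecar
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    exporters:
      debug: {}             # placeholder destination
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
---
# Pod annotation that requests injection of the CR above by name.
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # hypothetical name
  annotations:
    sidecar.opentelemetry.io/inject: my-sidecar
spec:
  containers:
  - name: app
    image: quay.io/example/app:latest  # hypothetical image
```

With this annotation value, the Operator injects a Collector container configured from the `my-sidecar` CR into the pod.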
1.2. OpenTelemetry Collector configuration options
The OpenTelemetry Collector consists of five types of components that access telemetry data:
- Receivers
- Processors
- Exporters
- Connectors
- Extensions
You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, only enable the components that you need.
Example of the OpenTelemetry Collector custom resource file

Note: If a component is configured but not defined in the `service` section, the component is not enabled.
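A minimal sketch of such a custom resource, assuming an OTLP Receiver, a Batch Processor, and a Debug Exporter as the enabled components (the CR name is hypothetical), might look like this:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel                # hypothetical name
spec:
  mode: deployment
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
```

A component declared under `receivers:`, `processors:`, or `exporters:` but absent from `service.pipelines` stays disabled.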
| Parameter | Description | Values | Default |
|---|---|---|---|
| `receivers:` | A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. | | None |
| `processors:` | Processors run through the received data before it is exported. By default, no processors are enabled. | | None |
| `exporters:` | An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. | | None |
| `connectors:` | Connectors join pairs of pipelines by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers. Connectors can be used to summarize, replicate, or route consumed data. | | None |
| `extensions:` | Optional components for tasks that do not involve processing telemetry data. | | None |
| `service:`<br>`  pipelines:` | Components are enabled by adding them to a pipeline under `service.pipelines`. | | |
| `service:`<br>`  pipelines:`<br>`    traces:`<br>`      receivers:` | You enable receivers for tracing by adding them under `service.pipelines.traces.receivers`. | | None |
| `service:`<br>`  pipelines:`<br>`    traces:`<br>`      processors:` | You enable processors for tracing by adding them under `service.pipelines.traces.processors`. | | None |
| `service:`<br>`  pipelines:`<br>`    traces:`<br>`      exporters:` | You enable exporters for tracing by adding them under `service.pipelines.traces.exporters`. | | None |
| `service:`<br>`  pipelines:`<br>`    metrics:`<br>`      receivers:` | You enable receivers for metrics by adding them under `service.pipelines.metrics.receivers`. | | None |
| `service:`<br>`  pipelines:`<br>`    metrics:`<br>`      processors:` | You enable processors for metrics by adding them under `service.pipelines.metrics.processors`. | | None |
| `service:`<br>`  pipelines:`<br>`    metrics:`<br>`      exporters:` | You enable exporters for metrics by adding them under `service.pipelines.metrics.exporters`. | | None |
1.3. Profile signal
The Profile signal is an emerging telemetry data format for observing code execution and resource consumption.
The Profile signal is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Profile signal allows you to pinpoint performance bottlenecks and resource inefficiencies down to the specific function or line of code. By correlating such high-fidelity profile data with traces, metrics, and logs, you can perform comprehensive performance analysis and targeted code optimization in production environments.
Profiling can target an application or operating system:
- Using profiling to observe an application can help developers validate code performance, prevent regressions, and monitor resource consumption such as memory and CPU usage, and thus identify and improve inefficient code.
- Using profiling to observe operating systems can provide insights into the infrastructure, system calls, kernel operations, and I/O wait times, and thus help in optimizing infrastructure for efficiency and cost savings.
OpenTelemetry Collector custom resource with the enabled Profile signal

1. Enables profiles by setting the `feature-gates` field.
2. Configures the OTLP Receiver to set up the OpenTelemetry Collector to receive profile data via the OTLP.
3. Configures where to export profiles to, such as a storage back end.
4. Defines a profiling pipeline, including a configuration for forwarding the received profile data to an OTLP-compatible profiling back end such as Grafana Pyroscope.
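A sketch of such a custom resource, assuming the CR passes the Collector's `feature-gates` flag via `spec.args` and that the upstream `service.profilesSupport` feature gate enables the signal; the CR name and the Pyroscope endpoint are hypothetical:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-profiles        # hypothetical name
spec:
  args:
    feature-gates: service.profilesSupport  # (1) assumed gate name for profile support
  config:
    receivers:
      otlp:                  # (2) receives profile data via OTLP
        protocols:
          grpc: {}
          http: {}
    exporters:
      otlp:                  # (3) destination for exported profiles
        endpoint: pyroscope.example.com:4317  # hypothetical back end
    service:
      pipelines:
        profiles:            # (4) profiling pipeline
          receivers: [otlp]
          exporters: [otlp]
```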
1.4. Creating the required RBAC resources automatically
Some Collector components require additional RBAC resources.

Procedure

- Add the following permissions to the `opentelemetry-operator-controller-manager` service account so that the Red Hat build of OpenTelemetry Operator can create the required RBAC resources automatically:
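A sketch of such permissions, granting the Operator's service account the ability to manage cluster roles and bindings; the ClusterRole name is hypothetical, and the `openshift-opentelemetry-operator` namespace is an assumed install location:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: generate-processors-rbac     # hypothetical name
rules:
- apiGroups: [rbac.authorization.k8s.io]
  resources: [clusterrolebindings, clusterroles]
  verbs: [create, delete, get, list, patch, update, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: generate-processors-rbac     # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: generate-processors-rbac
subjects:
- kind: ServiceAccount
  name: opentelemetry-operator-controller-manager
  namespace: openshift-opentelemetry-operator  # assumed install namespace
```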
1.5. Target Allocator
The Target Allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances. The Target Allocator integrates with the Prometheus `PodMonitor` and `ServiceMonitor` custom resources (CRs). When the Target Allocator is enabled, the OpenTelemetry Operator adds the `http_sd_config` field to the enabled Prometheus receiver so that it connects to the Target Allocator service.
The Target Allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Example OpenTelemetryCollector CR with the enabled Target Allocator

1. When the Target Allocator is enabled, the deployment mode must be set to `statefulset`.
2. Enables the Target Allocator. Defaults to `false`.
3. The service account name of the Target Allocator deployment. The service account needs to have RBAC to get the `ServiceMonitor` and `PodMonitor` custom resources and other objects from the cluster to properly set labels on scraped metrics. The default service account name is `<collector_name>-targetallocator`.
4. Enables integration with the Prometheus `PodMonitor` and `ServiceMonitor` custom resources.
5. Label selector for the Prometheus `ServiceMonitor` custom resources. When left empty, enables all service monitors.
6. Label selector for the Prometheus `PodMonitor` custom resources. When left empty, enables all pod monitors.
7. Prometheus receiver with the minimal, empty `scrape_configs: []` configuration option.
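A sketch of such a CR, with numbered comments matching the callouts above; the CR name and service account name are hypothetical, and the Debug Exporter stands in for a real metrics destination:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel                         # hypothetical name
spec:
  mode: statefulset                  # (1) required when the Target Allocator is enabled
  targetAllocator:
    enabled: true                    # (2)
    serviceAccount: otel-targetallocator  # (3) hypothetical service account name
    prometheusCR:
      enabled: true                  # (4)
      serviceMonitorSelector: {}     # (5) empty selector enables all service monitors
      podMonitorSelector: {}         # (6) empty selector enables all pod monitors
  config:
    receivers:
      prometheus:                    # (7) minimal Prometheus receiver
        config:
          scrape_configs: []
    exporters:
      debug: {}                      # placeholder destination
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]
```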
The Target Allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration.
RBAC configuration for the Target Allocator service account
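As a sketch of such an RBAC configuration, assuming the hypothetical service account name `otel-targetallocator` in a hypothetical `observability` namespace, the Target Allocator needs read access to the workload objects and monitoring CRs it discovers targets from:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-targetallocator         # hypothetical name
rules:
- apiGroups: [""]
  resources: [pods, nodes, services, endpoints, namespaces]
  verbs: [get, list, watch]
- apiGroups: [monitoring.coreos.com]
  resources: [servicemonitors, podmonitors]
  verbs: [get, list, watch]
- apiGroups: [discovery.k8s.io]
  resources: [endpointslices]
  verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-targetallocator         # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-targetallocator
subjects:
- kind: ServiceAccount
  name: otel-targetallocator
  namespace: observability           # hypothetical namespace
```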