Configuring the Collector
Setting up telemetry pipelines with receivers, processors, exporters, connectors, and extensions
Chapter 1. Configuring the Collector
The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat build of OpenTelemetry resources. You can install the default configuration or modify the file.
1.1. OpenTelemetry Collector deployment modes
The OpenTelemetryCollector custom resource allows you to specify one of the following deployment modes for the OpenTelemetry Collector:
- Deployment
- The default.
- StatefulSet
- If you need to run stateful workloads, for example when using the Collector’s File Storage Extension or Tail Sampling Processor, use the StatefulSet deployment mode.
- DaemonSet
- If you need to scrape telemetry data from every node, for example by using the Collector’s Filelog Receiver to read container logs, use the DaemonSet deployment mode.
- Sidecar
- If you need access to log files inside a container, inject the Collector as a sidecar, and use the Collector’s Filelog Receiver and a shared volume such as emptyDir. If you need to configure an application to send telemetry data via localhost, inject the Collector as a sidecar, and set up the Collector to forward the telemetry data to an external service via an encrypted and authenticated connection. The Collector runs in the same pod as the application when injected as a sidecar.

Note: If you choose the sidecar deployment mode, then in addition to setting the spec.mode: sidecar field in the OpenTelemetryCollector custom resource (CR), you must also set the sidecar.opentelemetry.io/inject annotation as a pod annotation or namespace annotation. If you set this annotation on both the pod and namespace, the pod annotation takes precedence if it is set to either false or the OpenTelemetryCollector CR name.

As a pod annotation, the sidecar.opentelemetry.io/inject annotation supports several values:

apiVersion: v1
kind: Pod
metadata:
  # ...
  annotations:
    sidecar.opentelemetry.io/inject: "<supported_value>"
# ...

where:

false
- Does not inject the Collector. This is the default if the annotation is missing.
true
- Injects the Collector with the configuration of the OpenTelemetryCollector CR in the same namespace.
<collector_name>
- Injects the Collector with the configuration of the <collector_name> OpenTelemetryCollector CR in the same namespace.
<namespace>/<collector_name>
- Injects the Collector with the configuration of the <collector_name> OpenTelemetryCollector CR in the <namespace> namespace.
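Selecting a deployment mode is a single field in the custom resource. The following is a minimal sketch; the collector name otel and the namespace observability are illustrative assumptions:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel                # hypothetical name
  namespace: observability  # hypothetical namespace
spec:
  # One of: deployment (default), statefulset, daemonset, sidecar
  mode: daemonset
  config:
    # receivers, processors, exporters, and service.pipelines go here
```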
1.2. OpenTelemetry Collector configuration options
The OpenTelemetry Collector consists of five types of components that access telemetry data:
- Receivers
- Processors
- Exporters
- Connectors
- Extensions
You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, only enable the components that you need.
The following is an example of the OpenTelemetry Collector custom resource file:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: cluster-collector
namespace: tracing-system
spec:
mode: deployment
observability:
metrics:
enableMetrics: true
config:
receivers:
otlp:
protocols:
grpc: {}
http: {}
processors: {}
exporters:
otlp:
endpoint: otel-collector-headless.tracing-system.svc:4317
tls:
ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
prometheus:
endpoint: 0.0.0.0:8889
resource_to_telemetry_conversion:
enabled: true # by default resource attributes are dropped
service:
pipelines:
traces:
receivers: [otlp]
processors: []
exporters: [otlp]
metrics:
receivers: [otlp]
processors: []
exporters: [prometheus]
where:
service
- If a component is configured but not defined in the service section, the component is not enabled.
The Operator uses the following parameters to define the OpenTelemetry Collector:
| Parameter | Description | Default |
|---|---|---|
| receivers | A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. | None |
| processors | Processors run through the received data before it is exported. By default, no processors are enabled. | None |
| exporters | An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. | None |
| connectors | Connectors join pairs of pipelines by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers. Connectors can be used to summarize, replicate, or route consumed data. | None |
| extensions | Optional components for tasks that do not involve processing telemetry data. | None |
| service.pipelines | Components are enabled by adding them to a pipeline under service.pipelines. | |
| service.pipelines.traces.receivers | You enable receivers for tracing by adding them under service.pipelines.traces. | None |
| service.pipelines.traces.processors | You enable processors for tracing by adding them under service.pipelines.traces. | None |
| service.pipelines.traces.exporters | You enable exporters for tracing by adding them under service.pipelines.traces. | None |
| service.pipelines.metrics.receivers | You enable receivers for metrics by adding them under service.pipelines.metrics. | None |
| service.pipelines.metrics.processors | You enable processors for metrics by adding them under service.pipelines.metrics. | None |
| service.pipelines.metrics.exporters | You enable exporters for metrics by adding them under service.pipelines.metrics. | None |
1.3. Profile signal
The Profile signal is an emerging telemetry data format for observing code execution and resource consumption.
The Profile signal is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Profile signal allows you to pinpoint inefficient code down to the specific function and line of code, precisely identifying performance bottlenecks and resource inefficiencies. By correlating such high-fidelity profile data with traces, metrics, and logs, you can perform comprehensive performance analysis and targeted code optimization in production environments.
Profiling can target an application or operating system:
- Using profiling to observe an application can help developers validate code performance, prevent regressions, and monitor resource consumption such as memory and CPU usage, and thus identify and improve inefficient code.
- Using profiling to observe operating systems can provide insights into the infrastructure, system calls, kernel operations, and I/O wait times, and thus help in optimizing infrastructure for efficiency and cost savings.
The following is an OpenTelemetry Collector custom resource with the enabled Profile signal:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: otel-profiles-collector
namespace: otel-profile
spec:
args:
feature-gates: service.profilesSupport
config:
receivers:
otlp:
protocols:
grpc:
endpoint: '0.0.0.0:4317'
http:
endpoint: '0.0.0.0:4318'
exporters:
otlp/pyroscope:
endpoint: "pyroscope.pyroscope-monitoring.svc.cluster.local:4317"
service:
pipelines:
profiles:
receivers: [otlp]
exporters: [otlp/pyroscope]
# ...
where:
feature-gates
- Enables profiles by setting the feature-gates field as shown here.
otlp
- Configures the OTLP Receiver so that the OpenTelemetry Collector receives profile data via OTLP.
endpoint
- Configures where to export profiles, such as a storage back end.
pipelines
- Defines a profiling pipeline, including a configuration for forwarding the received profile data to an OTLP-compatible profiling back end such as Grafana Pyroscope.
1.4. Creating the required RBAC resources automatically
Some Collector components require configuring the RBAC resources.
Procedure
Add the following permissions to the opentelemetry-operator-controller-manager service account so that the Red Hat build of OpenTelemetry Operator can create the required RBAC resources automatically:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: generate-processors-rbac
rules:
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: generate-processors-rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: generate-processors-rbac
subjects:
- kind: ServiceAccount
  name: opentelemetry-operator-controller-manager
  namespace: openshift-opentelemetry-operator
1.5. Target Allocator
The Target Allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances.
The Target Allocator integrates with the Prometheus PodMonitor and ServiceMonitor custom resources (CRs).
When the Target Allocator is enabled, the OpenTelemetry Operator adds the http_sd_config field to the enabled prometheus receiver that connects to the Target Allocator service.
The following is an example OpenTelemetryCollector CR with the enabled Target Allocator:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: otel
namespace: observability
spec:
mode: statefulset
targetAllocator:
enabled: true
serviceAccount:
prometheusCR:
enabled: true
scrapeInterval: 10s
serviceMonitorSelector:
name: app1
podMonitorSelector:
name: app2
config:
receivers:
prometheus:
config:
scrape_configs: []
processors:
exporters:
debug: {}
service:
pipelines:
metrics:
receivers: [prometheus]
processors: []
exporters: [debug]
# ...
where:
mode
- When the Target Allocator is enabled, the deployment mode must be set to statefulset.
enabled
- Enables the Target Allocator. Defaults to false.
serviceAccount
- Service account name of the Target Allocator deployment. The service account needs RBAC permissions to get the ServiceMonitor and PodMonitor custom resources, and other objects from the cluster, to properly set labels on scraped metrics. The default service account name is <collector_name>-targetallocator.
enabled
- Enables integration with the Prometheus PodMonitor and ServiceMonitor custom resources.
serviceMonitorSelector
- Label selector for the Prometheus ServiceMonitor custom resources. When left empty, enables all service monitors.
podMonitorSelector
- Label selector for the Prometheus PodMonitor custom resources. When left empty, enables all pod monitors.
prometheus
- Prometheus receiver with the minimal, empty scrape_configs: [] configuration option.
The Target Allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration.
The Target Allocator service account requires the following RBAC configuration:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otel-targetallocator
rules:
- apiGroups: [""]
resources:
- services
- pods
- namespaces
verbs: ["get", "list", "watch"]
- apiGroups: ["monitoring.coreos.com"]
resources:
- servicemonitors
- podmonitors
- scrapeconfigs
- probes
verbs: ["get", "list", "watch"]
- apiGroups: ["discovery.k8s.io"]
resources:
- endpointslices
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: otel-targetallocator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: otel-targetallocator
subjects:
- kind: ServiceAccount
name: otel-targetallocator
namespace: observability
# ...
where:
name
- Name of the Target Allocator service account.
namespace
- Namespace of the Target Allocator service account.
Chapter 2. Receivers
2.1. Receivers overview
Receivers get data into the Collector.
A receiver can be push or pull based. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers support one or more data sources.
Currently, the following General Availability and Technology Preview receivers are available for the Red Hat build of OpenTelemetry.
2.2. OTLP Receiver
The OTLP Receiver ingests traces, metrics, and logs by using the OpenTelemetry Protocol (OTLP).
The following is an OpenTelemetry Collector custom resource with an enabled OTLP Receiver:
# ...
config:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
tls:
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
client_ca_file: client.pem
reload_interval: 1h
http:
endpoint: 0.0.0.0:4318
tls: {}
service:
pipelines:
traces:
receivers: [otlp]
metrics:
receivers: [otlp]
# ...
where:
endpoint
- OTLP gRPC endpoint. If omitted, the default 0.0.0.0:4317 is used.
tls
- Server-side TLS configuration. Defines paths to TLS certificates. If omitted, TLS is disabled.
client_ca_file
- Path to the TLS certificate at which the server verifies a client certificate. This sets the value of ClientCAs and ClientAuth to RequireAndVerifyClientCert in the TLSConfig. For more information, see the Config of the Golang TLS package.
reload_interval
- The time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns, us, ms, s, m, h.
endpoint
- OTLP HTTP endpoint. The default value is 0.0.0.0:4318.
tls
- Server-side TLS configuration. For more information, see the grpc protocol configuration section.
2.3. Jaeger Receiver
The Jaeger Receiver ingests traces in the Jaeger formats.
The following is an OpenTelemetry Collector custom resource with an enabled Jaeger Receiver:
# ...
config:
receivers:
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_http:
endpoint: 0.0.0.0:14268
thrift_compact:
endpoint: 0.0.0.0:6831
thrift_binary:
endpoint: 0.0.0.0:6832
tls: {}
service:
pipelines:
traces:
receivers: [jaeger]
# ...
where:
endpoint
- Jaeger gRPC endpoint. If omitted, the default 0.0.0.0:14250 is used.
endpoint
- Jaeger Thrift HTTP endpoint. If omitted, the default 0.0.0.0:14268 is used.
endpoint
- Jaeger Thrift Compact endpoint. If omitted, the default 0.0.0.0:6831 is used.
endpoint
- Jaeger Thrift Binary endpoint. If omitted, the default 0.0.0.0:6832 is used.
tls
- Server-side TLS configuration. See the OTLP Receiver configuration section for more details.
2.4. Host Metrics Receiver
The Host Metrics Receiver ingests metrics in the OTLP format.
The following is an OpenTelemetry Collector custom resource with an enabled Host Metrics Receiver:
apiVersion: v1
kind: ServiceAccount
metadata:
name: otel-hostfs-daemonset
namespace: <namespace>
# ...
---
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: true
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: null
defaultAddCapabilities:
- SYS_ADMIN
fsGroup:
type: RunAsAny
groups: []
metadata:
name: otel-hostmetrics
readOnlyRootFilesystem: true
runAsUser:
type: RunAsAny
seLinuxContext:
type: RunAsAny
supplementalGroups:
type: RunAsAny
users:
- system:serviceaccount:<namespace>:otel-hostfs-daemonset
volumes:
- configMap
- emptyDir
- hostPath
- projected
# ...
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: otel
namespace: <namespace>
spec:
serviceAccount: otel-hostfs-daemonset
mode: daemonset
volumeMounts:
- mountPath: /hostfs
name: host
readOnly: true
volumes:
- hostPath:
path: /
name: host
config:
receivers:
hostmetrics:
collection_interval: 10s
initial_delay: 1s
root_path: /
scrapers:
cpu: {}
memory: {}
disk: {}
service:
pipelines:
metrics:
receivers: [hostmetrics]
# ...
where:
collection_interval
- Sets the time interval for host metrics collection. If omitted, the default value is 1m.
initial_delay
- Sets the initial time delay for host metrics collection. If omitted, the default value is 1s.
root_path
- Configures the root_path so that the Host Metrics Receiver knows where the root filesystem is. If running multiple instances of the Host Metrics Receiver, set the same root_path value for each instance.
scrapers
- Lists the enabled host metrics scrapers. Available scrapers are cpu, disk, load, filesystem, memory, network, paging, processes, and process.
2.5. Kubernetes Objects Receiver
The Kubernetes Objects Receiver pulls or watches objects to be collected from the Kubernetes API server. This receiver primarily watches Kubernetes events, but it can collect any type of Kubernetes object. Because the receiver gathers telemetry for the cluster as a whole, a single instance suffices for collecting all the data.
The Kubernetes Objects Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with an enabled Kubernetes Objects Receiver:
apiVersion: v1
kind: ServiceAccount
metadata:
name: otel-k8sobj
namespace: <namespace>
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otel-k8sobj
namespace: <namespace>
rules:
- apiGroups:
- ""
resources:
- events
- pods
verbs:
- get
- list
- watch
- apiGroups:
- "events.k8s.io"
resources:
- events
verbs:
- watch
- list
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: otel-k8sobj
subjects:
- kind: ServiceAccount
name: otel-k8sobj
namespace: <namespace>
roleRef:
kind: ClusterRole
name: otel-k8sobj
apiGroup: rbac.authorization.k8s.io
# ...
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: otel-k8s-obj
namespace: <namespace>
spec:
serviceAccount: otel-k8sobj
mode: deployment
config:
receivers:
k8sobjects:
auth_type: serviceAccount
objects:
- name: pods
mode: pull
interval: 30s
label_selector:
field_selector:
namespaces: [<namespace>,...]
- name: events
mode: watch
exporters:
debug:
service:
pipelines:
logs:
receivers: [k8sobjects]
exporters: [debug]
# ...
where:
name
- Resource name that this receiver observes: for example, pods, deployments, or events.
mode
- Observation mode that this receiver uses: pull or watch.
interval
- Only applicable to the pull mode. The request interval for pulling an object. If omitted, the default value is 1h.
label_selector
- Label selector to define targets.
field_selector
- Field selector to filter targets.
namespaces
- List of namespaces to collect events from. If omitted, the default value is all.
2.6. Kubelet Stats Receiver
The Kubelet Stats Receiver extracts metrics related to nodes, pods, containers, and volumes from the kubelet’s API server. These metrics are then channeled through the metrics-processing pipeline for additional analysis.
The following is an OpenTelemetry Collector custom resource with an enabled Kubelet Stats Receiver:
# ...
config:
receivers:
kubeletstats:
collection_interval: 20s
auth_type: "serviceAccount"
endpoint: "https://${env:K8S_NODE_NAME}:10250"
insecure_skip_verify: true
service:
pipelines:
metrics:
receivers: [kubeletstats]
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# ...
where:
name
- Sets the K8S_NODE_NAME environment variable so that the receiver can authenticate to the kubelet API on the node.
The Kubelet Stats Receiver requires additional permissions for the service account used for running the OpenTelemetry Collector.
The service account requires the following permissions:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otel-collector
rules:
- apiGroups: ['']
resources: ['nodes/stats']
verbs: ['get', 'watch', 'list']
- apiGroups: [""]
resources: ["nodes/proxy"]
verbs: ["get"]
# ...
where:
resources
- Permissions required when using the extra_metadata_labels, request_utilization, or limit_utilization metrics.
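Those settings are opt-in on the receiver side. The following hedged sketch shows how they might be enabled; the metric name follows the upstream Kubelet Stats Receiver conventions and should be verified against your Collector version:

```yaml
# ...
config:
  receivers:
    kubeletstats:
      collection_interval: 20s
      auth_type: "serviceAccount"
      endpoint: "https://${env:K8S_NODE_NAME}:10250"
      extra_metadata_labels:
        - container.id                  # requires the nodes/proxy permission above
      metrics:
        k8s.pod.cpu_limit_utilization:  # a limit-utilization metric (assumed name)
          enabled: true
# ...
```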
2.7. Prometheus Receiver
The Prometheus Receiver scrapes the metrics endpoints.
The following is an OpenTelemetry Collector custom resource with an enabled Prometheus Receiver:
# ...
config:
receivers:
prometheus:
config:
scrape_configs:
- job_name: 'my-app'
scrape_interval: 5s
static_configs:
- targets: ['my-app.example.svc.cluster.local:8888']
api_server:
enabled: true
server_config:
endpoint: "localhost:9090"
service:
pipelines:
metrics:
receivers: [prometheus]
# ...
where:
scrape_configs
- Scrape configurations in the Prometheus format.
job_name
- Prometheus job name.
scrape_interval
- Interval for scraping the metrics data. Accepts time units. The default value is 1m.
targets
- Targets at which the metrics are exposed. This example scrapes the metrics from a my-app application in the example project.
api_server
- When enabled, the Prometheus API server is useful for troubleshooting because it hosts information about targets, service discovery, and metrics. It provides the same paths as the Prometheus agent-mode API, including /api/v1/targets, /api/v1/targets/metadata, /api/v1/scrape_pools, /api/v1/status/config, and /metrics.
2.8. Prometheus Remote Write Receiver
The Prometheus Remote Write Receiver receives metrics from Prometheus using the Remote Write protocol and converts them to the OpenTelemetry format. This receiver supports only the Prometheus Remote Write v2 protocol.
The Prometheus Remote Write Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Receiver:
# ...
config:
receivers:
prometheusremotewrite:
endpoint: 0.0.0.0:9090
# ...
service:
pipelines:
metrics:
receivers: [prometheusremotewrite]
# ...
where:
endpoint
- Endpoint where the receiver listens for Prometheus Remote Write requests.
The following are the prerequisites for using this receiver with Prometheus:
- Prometheus is started with the metadata WAL records feature flag enabled.
- Prometheus Remote Write v2 Protocol is enabled in the Prometheus remote write configuration.
- Native histograms are enabled in Prometheus by using the feature flag.
- Prometheus is configured to convert classic histograms into native histograms.
For more information about enabling these Prometheus features, see the Prometheus documentation.
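As a hedged illustration of those prerequisites, the Prometheus-side configuration might look like the following; the flag names, the protobuf_message field, and the convert option are drawn from recent Prometheus releases and should be checked against your Prometheus version, and the endpoint and target names are assumptions:

```yaml
# prometheus.yml (sketch); start Prometheus with:
#   --enable-feature=metadata-wal-records,native-histograms
remote_write:
  - url: "http://otel-collector.example.svc:9090/api/v1/write"  # hypothetical receiver endpoint
    protobuf_message: io.prometheus.write.v2.Request            # Remote Write v2 protocol
scrape_configs:
  - job_name: "my-app"
    static_configs:
      - targets: ["my-app.example.svc:8080"]                    # hypothetical target
    convert_classic_histograms_to_nhcb: true                    # classic -> native histograms
```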
2.9. OTLP JSON File Receiver
The OTLP JSON File Receiver extracts pipeline information from files containing data in the ProtoJSON format and conforming to the OpenTelemetry Protocol specification.
The receiver watches a specified directory for changes such as created or modified files to process.
The OTLP JSON File Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled OTLP JSON File Receiver:
# ...
config:
  receivers:
    otlpjsonfile:
      include:
        - "/var/log/*.log"
      exclude:
        - "/var/log/test.log"
# ...
where:
include
- Lists file path glob patterns to watch.
exclude
- Lists file path glob patterns to ignore.
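For reference, each watched file is expected to contain OTLP payloads serialized as ProtoJSON. A single illustrative log line, with field names per the OpenTelemetry Protocol specification and the service name and body as assumed example values, might look like this:

```json
{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"my-app"}}]},"scopeLogs":[{"logRecords":[{"severityText":"INFO","body":{"stringValue":"hello from otlpjsonfile"}}]}]}]}
```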
2.10. Zipkin Receiver
The Zipkin Receiver ingests traces in the Zipkin v1 and v2 formats.
The following is an OpenTelemetry Collector custom resource with the enabled Zipkin Receiver:
# ...
config:
receivers:
zipkin:
endpoint: 0.0.0.0:9411
tls: {}
service:
pipelines:
traces:
receivers: [zipkin]
# ...
where:
endpoint
- Zipkin HTTP endpoint. If omitted, the default 0.0.0.0:9411 is used.
tls
- Server-side TLS configuration. See the OTLP Receiver configuration section for more details.
2.11. Kafka Receiver
The Kafka Receiver receives traces, metrics, and logs from Kafka in the OTLP format.
The Kafka Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled Kafka Receiver:
# ...
config:
receivers:
kafka:
brokers: ["localhost:9092"]
protocol_version: 2.0.0
topic: otlp_spans
logs:
encoding: raw
metrics:
encoding: otlp_proto
traces:
encoding: otlp_proto
auth:
plain_text:
username: example
password: example
tls:
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
insecure: false
server_name_override: kafka.example.corp
service:
pipelines:
traces:
receivers: [kafka]
# ...
where:
brokers
- List of Kafka brokers. The default is localhost:9092.
protocol_version
- Kafka protocol version. For example, 2.0.0. This is a required field.
topic
- Name of the Kafka topic to read from. The default is otlp_spans.
plain_text
- Plain text authentication configuration. If omitted, plain text authentication is disabled.
tls
- Client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
insecure
- Disables verifying the server’s certificate chain and host name. The default is false.
server_name_override
- ServerName indicates the name of the server requested by the client to support virtual hosting.
2.12. Kubernetes Cluster Receiver
The Kubernetes Cluster Receiver gathers cluster metrics and entity events from the Kubernetes API server. It uses the Kubernetes API to receive information about updates. Authentication for this receiver is only supported through service accounts.
The Kubernetes Cluster Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled Kubernetes Cluster Receiver:
# ...
config:
receivers:
k8s_cluster:
distribution: openshift
collection_interval: 10s
exporters:
debug: {}
service:
pipelines:
metrics:
receivers: [k8s_cluster]
exporters: [debug]
logs/entity_events:
receivers: [k8s_cluster]
exporters: [debug]
# ...
This receiver requires a configured service account, RBAC rules for the cluster role, and the cluster role binding that binds the RBAC with the service account.
ServiceAccount object
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: otelcontribcol
name: otelcontribcol
# ...
The ClusterRole object requires the following RBAC rules:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otelcontribcol
labels:
app: otelcontribcol
rules:
- apiGroups:
- quota.openshift.io
resources:
- clusterresourcequotas
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
- namespaces
- namespaces/status
- nodes
- nodes/spec
- pods
- pods/status
- replicationcontrollers
- replicationcontrollers/status
- resourcequotas
- services
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- daemonsets
- deployments
- replicasets
- statefulsets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- batch
resources:
- jobs
- cronjobs
verbs:
- get
- list
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- watch
# ...
ClusterRoleBinding object
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: otelcontribcol
labels:
app: otelcontribcol
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: otelcontribcol
subjects:
- kind: ServiceAccount
name: otelcontribcol
namespace: default
# ...
2.13. Filelog Receiver
The Filelog Receiver tails and parses logs from files.
The following is an OpenTelemetry Collector custom resource with an enabled Filelog Receiver that tails a text file:
# ...
config:
receivers:
filelog:
include: [ /simple.log ]
operators:
- type: regex_parser
regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
timestamp:
parse_from: attributes.time
layout: '%Y-%m-%d %H:%M:%S'
severity:
parse_from: attributes.sev
# ...
where:
include
- List of file glob patterns that match the file paths to be read.
operators
- Array of Operators. Each Operator performs a simple task such as parsing a timestamp or JSON. To process logs into the required format, chain the Operators together.
The next example shows how to make the Filelog Receiver work within security context constraints.
The following is an OpenTelemetry Collector custom resource with an enabled Filelog Receiver that parses cluster logs:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: otel-clusterlogs-collector-scc
allowPrivilegedContainer: false
requiredDropCapabilities:
- ALL
allowHostDirVolumePlugin: true
volumes:
- configMap
- emptyDir
- hostPath
- projected
- secret
defaultAllowPrivilegeEscalation: false
allowPrivilegeEscalation: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
readOnlyRootFilesystem: true
forbiddenSysctls:
- '*'
seccompProfiles:
- runtime/default
users:
- system:serviceaccount:observability:clusterlogs-collector
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: clusterlogs
  namespace: observability
spec:
  mode: daemonset
  config:
    receivers:
      filelog:
        include:
        - "/var/log/pods/*/*/*.log"
        exclude:
        - "/var/log/pods/*/otc-container/*.log"
        - "/var/log/pods/*/*/*.gz"
        - "/var/log/pods/*/*/*.log.*"
        - "/var/log/pods/*/*/*.tmp"
        include_file_path: true
        include_file_name: false
        operators:
        - type: container
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        logs:
          receivers: [filelog]
          exporters: [debug]
  securityContext:
    runAsUser: 0
    seLinuxOptions:
      type: spc_t
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
    seccompProfile:
      type: RuntimeDefault
    capabilities:
      drop:
      - ALL
  volumeMounts:
  - name: varlogpods
    mountPath: /var/log/pods
    readOnly: true
  volumes:
  - name: varlogpods
    hostPath:
      path: /var/log/pods
where:
- `name: otel-clusterlogs-collector-scc`: Configures a security context constraint (SCC) to allow access to files on the host.
- `system:serviceaccount`: The OpenTelemetry Operator creates this service account for the Collector. Assign the SCC to this service account.
- `exclude`: Excludes the Collector container's own logs from collection.
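As an illustration of the include and exclude patterns above, the following Python sketch approximates the receiver's matching with `fnmatch` (close, but not identical: here `*` can also cross `/` boundaries); the pod log paths are hypothetical:

```python
from fnmatch import fnmatch

# Include and exclude globs from the DaemonSet example above.
include = ["/var/log/pods/*/*/*.log"]
exclude = [
    "/var/log/pods/*/otc-container/*.log",
    "/var/log/pods/*/*/*.gz",
    "/var/log/pods/*/*/*.log.*",
    "/var/log/pods/*/*/*.tmp",
]

def collected(path: str) -> bool:
    """A file is read only if it matches an include glob and no exclude glob."""
    return any(fnmatch(path, g) for g in include) and not any(
        fnmatch(path, g) for g in exclude
    )

# Hypothetical pod log paths.
print(collected("/var/log/pods/myns_app_1234/app/0.log"))            # application log
print(collected("/var/log/pods/myns_otel_5678/otc-container/0.log")) # Collector's own log
```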
You can use this receiver to collect logs from pod filesystems in one of two ways:
- Configuring the receiver in a sidecar container running alongside your application pod.
- Deploying the receiver as a DaemonSet on the host machine with appropriate permissions to access Kubernetes logs.
To collect logs from application containers, you can use this receiver with sidecar injection. The Red Hat build of OpenTelemetry Operator allows injecting an OpenTelemetry Collector as a sidecar container into an application pod. This approach is useful when your application writes logs to files within the container filesystem. This receiver can then tail log files and apply Operators to parse the logs.
To use this receiver in sidecar mode to collect logs from application containers, you must configure volume mounts in the OpenTelemetryCollector custom resource. Both the application container and the sidecar Collector must mount the same shared volume, such as emptyDir. Define the volume in the application’s Pod specification. See the following example:
The following is an OpenTelemetry Collector custom resource with the Filelog Receiver configured in sidecar mode:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: filelog
  namespace: otel-logging
spec:
  mode: sidecar
  volumeMounts:
  - name: logs
    mountPath: /var/log/app
  config:
    receivers:
      filelog:
        include:
        - /var/log/app/*.log
        operators:
        - type: regex_parser
          regex: '^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[(?P<level>\w+)\] (?P<message>.*)$'
          timestamp:
            parse_from: attributes.timestamp
            layout: '%Y-%m-%d %H:%M:%S'
    processors: {}
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        logs:
          receivers: [filelog]
          processors: []
          exporters: [debug]
where:
- `volumeMounts`: Defines the volume mount that the sidecar Collector uses to access the target log files. The name must match a volume defined in the application's Pod specification.
- `include`: Specifies file glob patterns for matching the log files to tail. The receiver watches these paths for new log entries.
2.14. Journald Receiver
The Journald Receiver parses journald events from the systemd journal and sends them as logs.
The Journald Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled Journald Receiver:
apiVersion: v1
kind: Namespace
metadata:
  name: otel-journald
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false"
    pod-security.kubernetes.io/enforce: "privileged"
    pod-security.kubernetes.io/audit: "privileged"
    pod-security.kubernetes.io/warn: "privileged"
# ...
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: privileged-sa
  namespace: otel-journald
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-journald-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
- kind: ServiceAccount
  name: privileged-sa
  namespace: otel-journald
# ...
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-journald-logs
  namespace: otel-journald
spec:
  mode: daemonset
  serviceAccount: privileged-sa
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
      - CHOWN
      - DAC_OVERRIDE
      - FOWNER
      - FSETID
      - KILL
      - NET_BIND_SERVICE
      - SETGID
      - SETPCAP
      - SETUID
    readOnlyRootFilesystem: true
    seLinuxOptions:
      type: spc_t
    seccompProfile:
      type: RuntimeDefault
  config:
    receivers:
      journald:
        files: /var/log/journal/*/*
        priority: info
        units:
        - kubelet
        - crio
        - init.scope
        - dnsmasq
        all: true
        retry_on_failure:
          enabled: true
          initial_interval: 1s
          max_interval: 30s
          max_elapsed_time: 5m
    processors: {}
    exporters:
      debug: {}
    service:
      pipelines:
        logs:
          receivers: [journald]
          exporters: [debug]
  volumeMounts:
  - name: journal-logs
    mountPath: /var/log/journal/
    readOnly: true
  volumes:
  - name: journal-logs
    hostPath:
      path: /var/log/journal
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
# ...
where:
- `priority`: Filters output by message priorities or priority ranges. The default value is `info`.
- `units`: Lists the units to read entries from. If empty, entries are read from all units.
- `all`: Includes very long logs and logs with unprintable characters. The default value is `false`.
- `enabled`: If set to `true`, the receiver pauses reading a file and attempts to resend the current batch of logs when it encounters an error from downstream components. The default value is `false`.
- `initial_interval`: Time interval to wait after the first failure before retrying. The default value is `1s`. The supported units are `ms`, `s`, `m`, and `h`.
- `max_interval`: Upper bound for the retry backoff interval. When this value is reached, the time interval between consecutive retry attempts remains constant at this value. The default value is `30s`. The supported units are `ms`, `s`, `m`, and `h`.
- `max_elapsed_time`: Maximum time interval, including retry attempts, for attempting to send a logs batch to a downstream consumer. When this value is reached, the data are discarded. If the set value is `0`, retrying never stops. The default value is `5m`. The supported units are `ms`, `s`, `m`, and `h`.
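The retry settings above bound a backoff schedule. The following Python sketch illustrates the idea, assuming a simple doubling backoff; the real receiver adds jitter and its growth factor may differ:

```python
# Sketch of a retry schedule bounded by initial_interval, max_interval, and
# max_elapsed_time. Doubling growth is an assumption for illustration only.
def retry_schedule(initial, max_interval, max_elapsed):
    intervals, elapsed, wait = [], 0, initial
    while elapsed + wait <= max_elapsed:
        intervals.append(wait)
        elapsed += wait
        wait = min(wait * 2, max_interval)  # grow, capped at max_interval
    return intervals

schedule = retry_schedule(1, 30, 300)  # 1s, 30s, 5m, as in the example
print(schedule)       # waits grow until capped at max_interval
print(sum(schedule))  # total waiting stays within max_elapsed_time
```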
2.15. Kubernetes Events Receiver
The Kubernetes Events Receiver collects events from the Kubernetes API server. The collected events are converted into logs.
OpenShift Container Platform permissions required for the Kubernetes Events Receiver
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
  labels:
    app: otel-collector
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - namespaces/status
  - nodes
  - nodes/spec
  - pods
  - pods/status
  - replicationcontrollers
  - replicationcontrollers/status
  - resourcequotas
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
# ...
The following is an OpenTelemetry Collector custom resource with the enabled Kubernetes Events Receiver:
# ...
  serviceAccount: otel-collector
  config:
    receivers:
      k8s_events:
        namespaces: [project1, project2]
    service:
      pipelines:
        logs:
          receivers: [k8s_events]
# ...
where:
- `serviceAccount`: Service account of the Collector that has the required `otel-collector` ClusterRole RBAC.
- `namespaces`: List of namespaces to collect events from. The default value is empty, which means that all namespaces are collected.
Chapter 3. Processors
3.1. Processors overview
Processors process data between when it is received and when it is exported.
Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters.
Currently, the following General Availability and Technology Preview processors are available for the Red Hat build of OpenTelemetry.
3.2. Batch Processor
The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information.
The following is an example of the OpenTelemetry Collector custom resource when using the Batch Processor:
# ...
  config:
    processors:
      batch:
        timeout: 5s
        send_batch_max_size: 10000
    service:
      pipelines:
        traces:
          processors: [batch]
        metrics:
          processors: [batch]
# ...
The Batch Processor uses the following parameters:
| Parameter | Description | Default |
|---|---|---|
| `timeout` | Sends the batch after a specific time duration and irrespective of the batch size. | `200ms` |
| `send_batch_size` | Sends the batch of telemetry data after the specified number of spans or metrics. | `8192` |
| `send_batch_max_size` | The maximum allowable size of the batch. Must be equal to or greater than the `send_batch_size`. | `0` |
| `metadata_keys` | When activated, a batcher instance is created for each unique set of values found in the `client.Metadata`. | `[]` |
| `metadata_cardinality_limit` | When the `metadata_keys` field is not empty, this configuration restricts the number of distinct metadata key-value combinations processed throughout its lifespan. | `1000` |
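The two flush triggers described above, batch size and timeout, can be sketched as follows. This is an illustration of the behavior, not the actual implementation; the `Batcher` class and span names are hypothetical:

```python
# A sketch of the Batch Processor's two flush triggers: a full batch
# (send_batch_size) or an expired timer (timeout). Defaults mirror the table above.
class Batcher:
    def __init__(self, send_batch_size=8192, timeout=0.2):
        self.send_batch_size = send_batch_size
        self.timeout = timeout
        self.items = []
        self.age = 0.0

    def add(self, item, dt=0.0):
        """Add one span or metric; dt is the time elapsed since the last call."""
        self.age += dt
        self.items.append(item)
        if len(self.items) >= self.send_batch_size or self.age >= self.timeout:
            batch, self.items, self.age = self.items, [], 0.0
            return batch  # flushed downstream
        return None

b = Batcher(send_batch_size=3, timeout=5.0)
assert b.add("span-1") is None  # batch not full, timer not expired
assert b.add("span-2") is None
print(b.add("span-3"))          # third item fills the batch and triggers a flush
```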
3.3. Memory Limiter Processor
The Memory Limiter Processor periodically checks the Collector’s memory usage and pauses data processing when the soft memory limit is reached. The preceding component, which is typically a receiver, is expected to retry sending the same data and might apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run.
This processor supports traces, metrics, and logs.
The following is an example of the OpenTelemetry Collector custom resource when using the Memory Limiter Processor:
# ...
  config:
    processors:
      memory_limiter:
        check_interval: 1s
        limit_mib: 4000
        spike_limit_mib: 800
    service:
      pipelines:
        traces:
          processors: [memory_limiter]
        metrics:
          processors: [memory_limiter]
# ...
The Memory Limiter Processor uses the following parameters:
| Parameter | Description | Default |
|---|---|---|
| `check_interval` | Time between memory usage measurements. The optimal value is `1s`. For spiky traffic patterns, you can decrease the `check_interval` or increase the `spike_limit_mib`. | `0s` |
| `limit_mib` | The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value. | `0` |
| `spike_limit_mib` | Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of `limit_mib`. The soft limit is calculated by subtracting the `spike_limit_mib` from the `limit_mib`. | 20% of `limit_mib` |
| `limit_percentage` | Same as the `limit_mib` but expressed as a percentage of the total available memory. The `limit_mib` setting takes precedence over this setting. | `0` |
| `spike_limit_percentage` | Same as the `spike_limit_mib` but expressed as a percentage of the total available memory. Intended to be used with the `limit_percentage` setting. | `0` |
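The relationship between the limits in this example can be sketched as follows; the soft limit is the hard limit minus the spike allowance:

```python
# With limit_mib: 4000 and spike_limit_mib: 800 from the example above,
# processing pauses at the soft limit and garbage collection is forced
# at the hard limit.
limit_mib = 4000        # hard limit
spike_limit_mib = 800   # expected spike allowance
soft_limit_mib = limit_mib - spike_limit_mib
print(soft_limit_mib)
```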
3.4. Resource Detection Processor
The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry’s resource semantic conventions. Using the detected information, this processor can add or replace the resource values in telemetry data. You can use this processor with multiple detectors, such as the Docker metadata detector or the `OTEL_RESOURCE_ATTRIBUTES` environment variable detector.
This processor supports traces and metrics.
The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenShift Container Platform permissions required for the Resource Detection Processor
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: ["config.openshift.io"]
  resources: ["infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]
# ...
The following OpenTelemetry Collector custom resource uses the Resource Detection Processor:
# ...
  config:
    processors:
      resourcedetection:
        detectors: [openshift]
        override: true
    service:
      pipelines:
        traces:
          processors: [resourcedetection]
        metrics:
          processors: [resourcedetection]
# ...
The following OpenTelemetry Collector custom resource uses the Resource Detection Processor with an environment variable detector:
# ...
  config:
    processors:
      resourcedetection/env:
        detectors: [env]
        timeout: 2s
        override: false
# ...
where:
- `detectors`: Specifies which detector to use. In this example, the environment detector is specified.
3.5. Attributes Processor
The Attributes Processor can modify attributes of a span, log, or metric. You can configure this processor to filter and match input data and include or exclude such data for specific actions.
This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported:
- Insert
- Inserts a new attribute into the input data when the specified key does not already exist.
- Update
- Updates an attribute in the input data if the key already exists.
- Upsert
- Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists.
- Delete
- Removes an attribute from the input data.
- Hash
- Hashes an existing attribute value as SHA1.
- Extract
-
Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span Processor’s
to_attributessetting with the existing attribute as the source. - Convert
- Converts an existing attribute to a specified type.
The following OpenTelemetry Collector custom resource uses the Attributes Processor:
# ...
  config:
    processors:
      attributes/example:
        actions:
        - key: db.table
          action: delete
        - key: redacted_span
          value: true
          action: upsert
        - key: copy_key
          from_attribute: key_original
          action: update
        - key: account_id
          value: 2245
          action: insert
        - key: account_password
          action: delete
        - key: account_email
          action: hash
        - key: http.status_code
          action: convert
          converted_type: int
# ...
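The action semantics can be sketched in Python as follows. This is an illustration, not the actual implementation; the `apply` helper and sample attributes are hypothetical:

```python
import hashlib

# A sketch of the insert/update/upsert/delete/hash action semantics on an
# attribute map.
def apply(attrs, action, key, value=None, from_attribute=None):
    if from_attribute is not None:
        value = attrs.get(from_attribute)
    if action == "insert" and key not in attrs:
        attrs[key] = value          # only when the key does not exist yet
    elif action == "update" and key in attrs:
        attrs[key] = value          # only when the key already exists
    elif action == "upsert":
        attrs[key] = value          # insert or update
    elif action == "delete":
        attrs.pop(key, None)
    elif action == "hash" and key in attrs:
        attrs[key] = hashlib.sha1(str(attrs[key]).encode()).hexdigest()
    return attrs

attrs = {"db.table": "users", "account_email": "user@example.com"}
apply(attrs, "delete", "db.table")
apply(attrs, "upsert", "redacted_span", value=True)
apply(attrs, "insert", "account_id", value=2245)
apply(attrs, "hash", "account_email")
print(attrs)
```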
3.6. Resource Processor
The Resource Processor applies changes to the resource attributes.
This processor supports traces, metrics, and logs.
The following OpenTelemetry Collector custom resource uses the Resource Processor:
# ...
  config:
    processors:
      resource:
        attributes:
        - key: cloud.availability_zone
          value: "zone-1"
          action: upsert
        - key: k8s.cluster.name
          from_attribute: k8s-cluster
          action: insert
        - key: redundant-attribute
          action: delete
# ...
where:
- `attributes`: Actions applied to the resource attributes, such as deleting, inserting, or upserting an attribute.
3.7. Span Processor
The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces.
Span renaming requires specifying attributes for the new name by using the from_attributes configuration.
The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following OpenTelemetry Collector custom resource uses the Span Processor to rename a span:
# ...
  config:
    processors:
      span:
        name:
          from_attributes: [<key1>, <key2>, ...]
          separator: <value>
# ...
where:
- `from_attributes`: Defines the attribute keys whose values form the new span name.
- `separator`: Optional separator.
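The renaming behavior can be sketched as follows; the `rename` helper and the attribute keys are hypothetical:

```python
# A sketch of span renaming: the values of the from_attributes keys are joined
# by the separator. If any key is missing, the name is left unchanged (modeled
# here as returning None).
def rename(attributes, from_attributes, separator):
    if not all(k in attributes for k in from_attributes):
        return None
    return separator.join(str(attributes[k]) for k in from_attributes)

span_attrs = {"http.method": "GET", "http.route": "/api/v1/users"}
print(rename(span_attrs, ["http.method", "http.route"], "::"))
```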
You can use this processor to extract attributes from the span name.
The following OpenTelemetry Collector custom resource uses the Span Processor to extract attributes from a span name:
# ...
  config:
    processors:
      span/to_attributes:
        name:
          to_attributes:
            rules:
            - ^\/api\/v1\/document\/(?P<documentId>.*)\/update$
# ...
where:
- `<documentId>`: This rule defines how the extraction is executed; you can define more rules. In this case, if the regular expression matches the span name, a `documentId` attribute is created. For example, if the input span name is `/api/v1/document/12345678/update`, the output span name becomes `/api/v1/document/{documentId}/update`, and a new `"documentId"="12345678"` attribute is added to the span.
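The extraction rule above can be reproduced with Python's `re` module; the `extract` helper is hypothetical, but the regular expression and the resulting name and attribute follow the example:

```python
import re

# The to_attributes rule from the example: matching segments become attributes,
# and the span name is rewritten with the capture-group placeholder.
rule = re.compile(r'^\/api\/v1\/document\/(?P<documentId>.*)\/update$')

def extract(span_name):
    m = rule.match(span_name)
    if not m:
        return span_name, {}  # name unchanged, no attributes added
    new_name = rule.sub('/api/v1/document/{documentId}/update', span_name)
    return new_name, m.groupdict()

name, attrs = extract("/api/v1/document/12345678/update")
print(name)
print(attrs)
```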
You can also modify the span status.
The following OpenTelemetry Collector custom resource uses the Span Processor to change the status:
# ...
  config:
    processors:
      span/set_status:
        status:
          code: Error
          description: "<error_description>"
# ...
3.8. Kubernetes Attributes Processor
The Kubernetes Attributes Processor enables automatic attachment of Kubernetes metadata as resource attributes to spans, metrics, and logs.
This processor supports traces, metrics, and logs.
This processor automatically identifies the Kubernetes resources, extracts the metadata from them, and incorporates this extracted metadata as resource attributes into relevant spans, metrics, and logs. It utilizes the Kubernetes API to discover all pods operating within a cluster, maintaining records of their IP addresses, pod UIDs, and other relevant metadata.
The Kubernetes Attributes Processor requires the following minimum OpenShift Container Platform permissions:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: otel-collector
rules:
- apiGroups: ['']
  resources: ['pods', 'namespaces']
  verbs: ['get', 'watch', 'list']
- apiGroups: ['apps']
  resources: ['replicasets']
  verbs: ['get', 'watch', 'list']
The following OpenTelemetry Collector custom resource uses the Kubernetes Attributes Processor:
# ...
spec:
  env:
  - name: KUBE_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  config:
    processors:
      k8sattributes:
        filter:
          namespace: <namespace>
          node_from_env_var: KUBE_NODE_NAME
# ...
where:
- `filter`: Optional: Filters resource objects cached for metadata extraction, which reduces API requests and memory usage.
3.9. Filter Processor
The Filter Processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator.
This processor supports traces, metrics, and logs.
The following is an OpenTelemetry Collector custom resource with an enabled Filter Processor:
# ...
  config:
    processors:
      filter/ottl:
        error_mode: ignore
        traces:
          span:
          - 'attributes["container.name"] == "app_container_1"'
          - 'resource.attributes["host.name"] == "localhost"'
# ...
where:
- `error_mode`: Defines the error mode. When set to `ignore`, the processor ignores errors returned by conditions. When set to `propagate`, it returns the error up the pipeline, which causes the payload to be dropped from the Collector.
- `attributes`: Filters out the spans that have the `container.name == app_container_1` attribute.
- `resource.attributes`: Filters out the spans that have the `host.name == localhost` resource attribute.
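The OR semantics for the two conditions above can be sketched as follows; the `should_drop` helper and span data are hypothetical:

```python
# A sketch of the Filter Processor's drop logic for the example above: a span
# is dropped if ANY condition matches (logical OR).
def should_drop(span_attrs, resource_attrs):
    conditions = [
        span_attrs.get("container.name") == "app_container_1",
        resource_attrs.get("host.name") == "localhost",
    ]
    return any(conditions)

print(should_drop({"container.name": "app_container_1"}, {}))             # dropped
print(should_drop({"container.name": "other"}, {"host.name": "prod-1"}))  # kept
```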
3.10. Cumulative-to-Delta Processor
The Cumulative-to-Delta Processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics.
You can filter metrics by using the `include` or `exclude` fields and specifying `strict` or `regexp` metric name matching.
Because this processor calculates delta by storing the previous value of a metric, you must set up the metric source to send the metric data to a single stateful Collector instance rather than a deployment of multiple Collectors.
This processor does not convert non-monotonic sums and exponential histograms.
The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor:
# ...
  mode: sidecar
  config:
    processors:
      cumulativetodelta:
        include:
          match_type: strict
          metrics:
          - <metric_1_name>
          - <metric_2_name>
        exclude:
          match_type: regexp
          metrics:
          - "<regular_expression_for_metric_names>"
# ...
where:
- `mode`: To tie the Collector’s lifecycle to the metric source, you can run the Collector as a sidecar of the application that emits the cumulative-temporality metrics.
- `include`: Optional: You can limit which metrics the processor converts by explicitly listing them in this stanza. If you omit this field, the processor converts all metrics except the metrics listed in the `exclude` field.
- `match_type`: Defines whether the values in the `metrics` field are treated as exact matches (`strict`) or regular expressions (`regexp`).
- `metrics`: Lists the names, or regular expressions for the names, of the metrics that you want to convert. If a metric matches both the `include` and `exclude` filters, the `exclude` filter takes precedence.
- `exclude`: Optional: You can exclude certain metrics from conversion by explicitly listing them here.
3.11. Group-by-Attributes Processor
The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes.
The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
At minimum, configuring this processor involves specifying an array of attribute keys for grouping spans, log records, or metric data points together.
The following is an example of the OpenTelemetry Collector custom resource when using the Group-by-Attributes Processor:
# ...
  config:
    processors:
      groupbyattrs:
        keys:
        - <key1>
        - <key2>
# ...
where:
- `keys`: Attribute keys to group by.
- `<key1>`: If a processed span, log record, or metric data point contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values. If no such Resource exists, a new one is created. If none of the specified attribute keys is present, the span, log record, or metric data point remains associated with its current Resource. Multiple instances of the same Resource are consolidated.
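The regrouping can be sketched as follows; the `group_by_attrs` helper, records, and keys are hypothetical:

```python
from collections import defaultdict

# A sketch of the grouping: records carrying the grouping keys are reassigned
# to a Resource keyed by those attribute values, so records with equal values
# end up on the same Resource.
def group_by_attrs(records, keys):
    resources = defaultdict(list)
    for attrs, value in records:
        group = tuple(sorted((k, attrs[k]) for k in keys if k in attrs))
        resources[group].append(value)
    return dict(resources)

records = [
    ({"host.name": "node-1"}, "dp1"),
    ({"host.name": "node-2"}, "dp2"),
    ({"host.name": "node-1"}, "dp3"),
]
print(group_by_attrs(records, ["host.name"]))
```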
3.12. Transform Processor
The Transform Processor modifies telemetry data according to rules specified in the OpenTelemetry Transformation Language (OTTL). For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, and conditions dictate whether a statement is executed.
All statements are written in the OTTL. You can configure multiple context statements for different signals, traces, metrics, and logs. The value of the context type specifies which OTTL Context the processor must use when interpreting the associated statements.
The following is a configuration summary:
# ...
  config:
    processors:
      transform:
        error_mode: ignore
        <trace|metric|log>_statements:
        - context: <string>
          conditions:
          - <string>
          - <string>
          statements:
          - <string>
          - <string>
          - <string>
        - context: <string>
          statements:
          - <string>
          - <string>
          - <string>
# ...
where:
- `error_mode`: Optional: See the following table "Values for the optional `error_mode` field".
- `<trace|metric|log>_statements`: Indicates the signal to be transformed.
- `context`: See the following table "Values for the `context` field".
- `conditions`: Optional: Conditions for performing a transformation.
The following is an example of the OpenTelemetry Collector custom resource when using the Transform Processor:
# ...
  config:
    processors:
      transform:
        error_mode: ignore
        trace_statements:
        - context: resource
          statements:
          - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"])
          - replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***")
          - limit(attributes, 100, [])
          - truncate_all(attributes, 4096)
        - context: span
          statements:
          - set(status.code, 1) where attributes["http.path"] == "/health"
          - set(name, attributes["http.route"])
          - replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}")
          - limit(attributes, 100, [])
          - truncate_all(attributes, 4096)
# ...
where:
- `trace_statements`: Transforms a trace signal.
- `keep_keys`: Keeps only the listed keys on the resource attributes.
- `replace_pattern`: Replaces string characters that match a pattern; here, password values are replaced with asterisks.
- `context`: Performs transformations at the span level.
The `context` field accepts the following values:
| Signal statement | Valid contexts |
|---|---|
| `trace_statements` | `resource`, `scope`, `span`, `spanevent` |
| `metric_statements` | `resource`, `scope`, `metric`, `datapoint` |
| `log_statements` | `resource`, `scope`, `log` |
The optional `error_mode` field accepts the following values:
| Value | Description |
|---|---|
| `ignore` | Ignores and logs errors returned by statements and then continues to the next statement. |
| `silent` | Ignores and does not log errors returned by statements and then continues to the next statement. |
| `propagate` | Returns errors up the pipeline and drops the payload. Implicit default. |
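The `replace_pattern` statement above can be reproduced with Python's `re` module (the doubled backslashes in the OTTL statement are YAML escaping); the command line is hypothetical:

```python
import re

# The masking pattern from the trace_statements example: a password value and
# an optional trailing space are replaced with a fixed masked token.
pattern = re.compile(r"password=[^\s]*(\s?)")
command_line = "serve --user=alice --port=8080 password=s3cr3t"
masked = pattern.sub("password=***", command_line)
print(masked)
```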
3.13. Tail Sampling Processor
The Tail Sampling Processor samples traces according to user-defined policies when all of the spans are completed. Tail-based sampling enables you to filter the traces of interest and reduce your data ingestion and storage costs.
The Tail Sampling Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
This processor reassembles spans into new batches and strips spans of their original context.
- In pipelines, place this processor downstream of any processors that rely on context: for example, after the Kubernetes Attributes Processor.
- If scaling the Collector, ensure that one Collector instance receives all spans of the same trace so that this processor makes correct sampling decisions based on the specified sampling policies. You can achieve this by setting up two layers of Collectors: the first layer of Collectors with the Load Balancing Exporter, and the second layer of Collectors with the Tail Sampling Processor.
The following is an example of the OpenTelemetry Collector custom resource when using the Tail Sampling Processor:
# ...
  config:
    processors:
      tail_sampling:
        decision_wait: 30s
        num_traces: 50000
        expected_new_traces_per_sec: 10
        policies:
        [
          {
            <definition_of_policy_1>
          },
          {
            <definition_of_policy_2>
          },
          {
            <definition_of_policy_3>
          },
        ]
# ...
where:
- `tail_sampling`: Processor name.
- `decision_wait`: Optional: Decision delay time, counted from the time of the first span, before the processor makes a sampling decision on each trace. Defaults to `30s`.
- `num_traces`: Optional: The number of traces kept in memory. Defaults to `50000`.
- `expected_new_traces_per_sec`: Optional: The expected number of new traces per second, which is helpful for allocating data structures. Defaults to `0`.
- `policies`: Definitions of the policies for trace evaluation. The processor evaluates each trace against all of the specified policies and then either samples or drops the trace.
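The policy evaluation can be sketched as follows, using the latency and span-count thresholds from this section. This is an illustration of the decision logic, not the actual implementation; the helpers and trace data are hypothetical:

```python
# Each policy votes on a completed trace; a trace is sampled if any policy
# votes to sample it. Thresholds mirror the latency and span_count examples.
def latency_policy(trace, threshold_ms=5000, upper_threshold_ms=10000):
    duration = max(s["end_ms"] for s in trace) - min(s["start_ms"] for s in trace)
    return threshold_ms <= duration <= upper_threshold_ms

def span_count_policy(trace, min_spans=2, max_spans=20):
    return min_spans <= len(trace) <= max_spans

def sample(trace, policies):
    return any(policy(trace) for policy in policies)

trace = [
    {"start_ms": 0, "end_ms": 7000},
    {"start_ms": 100, "end_ms": 2500},
]
print(sample(trace, [latency_policy, span_count_policy]))
```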
You can choose and combine policies from the following list:
The following policy samples all traces:
# ... policies: [ { name: <always_sample_policy>, type: always_sample, }, ] # ...The following policy samples only traces of a duration that is within a specified range:
# ... policies: [ { name: <latency_policy>, type: latency, latency: {threshold_ms: 5000, upper_threshold_ms: 10000} }, ] # ...where:
latency-
Provided
5000and10000values are examples. You can estimate the desired latency values by looking at the earliest start time value and latest end time value. If you omit theupper_threshold_msfield, this policy samples all latencies greater than the specifiedthreshold_msvalue.
The following policy samples traces by numeric value matches for resource and record attributes:
# ... policies: [ { name: <numeric_attribute_policy>, type: numeric_attribute, numeric_attribute: {key: <key1>, min_value: 50, max_value: 100} }, ] # ...where:
numeric_attribute-
Provided
50and100values are examples.
The following policy samples only a percentage of traces:
# ... policies: [ { name: <probabilistic_policy>, type: probabilistic, probabilistic: {sampling_percentage: 10} }, ] # ...where:
probabilistic-
Provided
10value is an example.
The following policy samples traces by the status code:
OK,ERROR, orUNSET:# ... policies: [ { name: <status_code_policy>, type: status_code, status_code: {status_codes: [ERROR, UNSET]} }, ] # ...The following policy samples traces by string value matches for resource and record attributes:
# ... policies: [ { name: <string_attribute_policy>, type: string_attribute, string_attribute: {key: <key2>, values: [<value1>, <val>*], enabled_regex_matching: true, cache_max_size: 10} }, ] # ...where:
string_attribute-
This policy definition supports both exact and regular-expression value matches. The provided
10value in thecache_max_sizefield is an example.
The following policy samples traces by the rate of spans per second:
# ... policies: [ { name: <rate_limiting_policy>, type: rate_limiting, rate_limiting: {spans_per_second: 35} }, ] # ...where:
rate_limiting-
Provided
35value is an example.
The following policy samples traces by the minimum and maximum number of spans inclusively:
# ... policies: [ { name: <span_count_policy>, type: span_count, span_count: {min_spans: 2, max_spans: 20} }, ] # ...where:
span_count-
If the sum of all spans in the trace is outside the range threshold, the trace is not sampled. The provided
2and20values are examples.
The following policy samples traces by TraceState value matches:

# ...
      policies: [
          { name: <trace_state_policy>, type: trace_state, trace_state: { key: <key3>, values: [<value1>, <value2>] } },
        ]
# ...

The following policy samples traces by a boolean attribute (resource and record):

# ...
      policies: [
          { name: <bool_attribute_policy>, type: boolean_attribute, boolean_attribute: {key: <key4>, value: true} },
        ]
# ...

The following policy samples traces by a given boolean OTTL condition for a span or span event:

# ...
      policies: [
          {
            name: <ottl_policy>,
            type: ottl_condition,
            ottl_condition: {
              error_mode: ignore,
              span: [
                "attributes[\"<test_attr_key_1>\"] == \"<test_attr_value_1>\"",
                "attributes[\"<test_attr_key_2>\"] != \"<test_attr_value_1>\"",
              ],
              spanevent: [
                "name != \"<test_span_event_name>\"",
                "attributes[\"<test_event_attr_key_2>\"] != \"<test_event_attr_value_1>\"",
              ]
            }
          },
        ]
# ...

The following is an AND policy that samples traces based on a combination of multiple policies:

# ...
      policies: [
          {
            name: <and_policy>,
            type: and,
            and: {
              and_sub_policy: [
                { name: <and_policy_1>, type: numeric_attribute, numeric_attribute: { key: <key1>, min_value: 50, max_value: 100 } },
                { name: <and_policy_2>, type: string_attribute, string_attribute: { key: <key2>, values: [ <value1>, <value2> ] } },
              ]
            }
          },
        ]
# ...

where:

numeric_attribute
- The provided 50 and 100 values are examples.
The following is a DROP policy that drops traces from sampling based on a combination of multiple policies:

# ...
      policies: [
          {
            name: <drop_policy>,
            type: drop,
            drop: {
              drop_sub_policy: [
                { name: <drop_policy_1>, type: string_attribute, string_attribute: {key: url.path, values: [\/health, \/metrics], enabled_regex_matching: true} }
              ]
            }
          },
        ]
# ...

The following policy samples traces by a combination of the previous samplers, with ordering and rate allocation per sampler:

# ...
      policies: [
          {
            name: <composite_policy>,
            type: composite,
            composite: {
              max_total_spans_per_second: 100,
              policy_order: [<composite_policy_1>, <composite_policy_2>, <composite_policy_3>],
              composite_sub_policy: [
                { name: <composite_policy_1>, type: numeric_attribute, numeric_attribute: {key: <key1>, min_value: 50} },
                { name: <composite_policy_2>, type: string_attribute, string_attribute: {key: <key2>, values: [<value1>, <value2>]} },
                { name: <composite_policy_3>, type: always_sample }
              ],
              rate_allocation: [
                { policy: <composite_policy_1>, percent: 50 },
                { policy: <composite_policy_2>, percent: 25 }
              ]
            }
          },
        ]
# ...

where:

percent
- Allocates percentages of spans according to the order of the applied policies. For example, if you set the 100 value in the max_total_spans_per_second field, you can set the following values in the rate_allocation section: the 50 percent value in the policy: <composite_policy_1> section to allocate 50 spans per second, and the 25 percent value in the policy: <composite_policy_2> section to allocate 25 spans per second. To fill the remaining capacity, you can set the always_sample value in the type field of the name: <composite_policy_3> section.
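The policy fragments above omit the surrounding processor definition and pipeline wiring. The following is a minimal sketch that shows where a policy list plugs in; the tail_sampling processor name and decision_wait field come from the upstream Tail Sampling Processor, while the errors-only policy name and 30s wait value are illustrative:

```yaml
# ...
  config:
    processors:
      tail_sampling:
        decision_wait: 30s          # how long to buffer a trace before a sampling decision; example value
        policies: [
            { name: errors-only, type: status_code, status_code: {status_codes: [ERROR]} },
          ]
    service:
      pipelines:
        traces:
          processors: [tail_sampling]
# ...
```

Any of the policy fragments in this section can be substituted into the policies list of such a configuration.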
3.14. Probabilistic Sampling Processor
If you handle high volumes of telemetry data and want to reduce costs by processing less data, you can use the Probabilistic Sampling Processor as an alternative to the Tail Sampling Processor.
Probabilistic Sampling Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The processor samples a specified percentage of trace spans or log records statelessly and per request.
The processor adds the information about the used effective sampling probability into the telemetry data:
- In trace spans, the processor encodes the threshold and optional randomness information in the W3C Trace Context tracestate field.
- In log records, the processor encodes the threshold and randomness information as attributes.
The following is an example OpenTelemetryCollector custom resource configuration for the Probabilistic Sampling Processor for sampling trace spans:
# ...
  config:
    processors:
      probabilistic_sampler:
        sampling_percentage: 15.3
        mode: "proportional"
        hash_seed: 22
        sampling_precision: 14
        fail_closed: true
# ...
    service:
      pipelines:
        traces:
          processors: [probabilistic_sampler]
# ...
where:
probabilistic_sampler
- For trace pipelines, the source of randomness is the hashed value of the span trace ID.

sampling_percentage
- Required. Accepts a 32-bit floating point percentage value at which spans are to be sampled.

mode
- Optional. Accepts a supported string value for a sampling logic mode: the default hash_seed, proportional, or equalizing. The hash_seed mode applies the Fowler–Noll–Vo (FNV) hash function to the trace ID and weighs the hashed value against the sampling percentage value. You can also use the hash_seed mode with units of telemetry other than the trace ID. The proportional mode samples a strict, probability-based ratio of the total span quantity, and is based on the OpenTelemetry and World Wide Web Consortium specifications. The equalizing mode is useful for lowering the sampling probability to a minimum value across a whole pipeline or for applying a uniform sampling probability in Collector deployments where client SDKs have mixed sampling configurations.

hash_seed
- Optional. Accepts a 32-bit unsigned integer, which is used to compute the hash algorithm. When this field is not configured, the default seed value is 0. If you use multiple tiers of Collector instances, you must configure all Collectors of the same tier with the same seed value.

sampling_precision
- Optional. Determines the number of hexadecimal digits used to encode the sampling threshold. Accepts an integer value in the range 1-14. The default value 4 causes the threshold to be rounded if it contains more than 16 significant bits, which is the case in the proportional mode, which uses 56 bits. If you select the proportional mode, use a greater value to preserve the precision applied by preceding samplers.

fail_closed
- Optional. Rejects spans with sampling errors. Accepts a boolean value. The default value is true.
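As an illustration of the equalizing mode described above, the following sketch applies a uniform minimum sampling probability across a traces pipeline; the processor instance name and the 5 percent value are assumptions for the example:

```yaml
# ...
  config:
    processors:
      probabilistic_sampler/equalizing:
        sampling_percentage: 5      # illustrative uniform minimum probability
        mode: "equalizing"
    service:
      pipelines:
        traces:
          processors: [probabilistic_sampler/equalizing]
# ...
```

This setup is useful when upstream SDKs sample at mixed, higher rates and you want a consistent effective probability at the Collector.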
The following is an example OpenTelemetryCollector custom resource configuration for the Probabilistic Sampling Processor for sampling log records:
# ...
  config:
    processors:
      probabilistic_sampler/logs:
        sampling_percentage: 15.3
        mode: "hash_seed"
        hash_seed: 22
        sampling_precision: 4
        attribute_source: "record"
        from_attribute: "<log_record_attribute_name>"
        fail_closed: true
# ...
    service:
      pipelines:
        logs:
          processors: [probabilistic_sampler/logs]
# ...
where:
sampling_percentage
- Required. Accepts a 32-bit floating point percentage value at which log records are to be sampled.

mode
- Optional. Accepts a supported string value for a sampling logic mode: the default hash_seed, equalizing, or proportional. The hash_seed mode applies the Fowler–Noll–Vo (FNV) hash function to the trace ID or a specified log record attribute and then weighs the hashed value against the sampling percentage value. You can also use the hash_seed mode with units of telemetry other than the trace ID, for example to use the service.instance.id resource attribute for collecting log records from a percentage of pods. The equalizing mode is useful for lowering the sampling probability to a minimum value across a whole pipeline or for applying a uniform sampling probability in Collector deployments where client SDKs have mixed sampling configurations. The proportional mode samples a strict, probability-based ratio of the total log record quantity, and is based on the OpenTelemetry and World Wide Web Consortium specifications.

hash_seed
- Optional. Accepts a 32-bit unsigned integer, which is used to compute the hash algorithm. When this field is not configured, the default seed value is 0. If you use multiple tiers of Collector instances, you must configure all Collectors of the same tier with the same seed value.

sampling_precision
- Optional. Determines the number of hexadecimal digits used to encode the sampling threshold. Accepts an integer value in the range 1-14. The default value 4 causes the threshold to be rounded if it contains more than 16 significant bits, which is the case in the proportional mode, which uses 56 bits. If you select the proportional mode, use a greater value to preserve the precision applied by preceding samplers.

attribute_source
- Optional. Defines where to look for the log record attribute specified in from_attribute. That attribute is used as the source of randomness. Accepts the default traceID value or the record value.

from_attribute
- Optional. The name of a log record attribute used to compute the sampling hash, such as a unique log record ID. Accepts a string value. The default value is "". Use this field only if you need to specify a log record attribute as the source of randomness, for example when the trace ID is absent or trace ID sampling is disabled, and when the attribute_source field is set to the record value.

fail_closed
- Optional. Rejects log records with sampling errors. Accepts a boolean value. The default value is true.
3.15. Metric Start Time Processor
The Metric Start Time Processor sets start times for metric points that have cumulative aggregation temporality.
You can use this processor to add start times to cumulative metrics after the Prometheus Receiver, which produces metric points without start times.
This processor can provide several benefits:
- Improve historical data analysis by adding start time data for cumulative values.
- Enable the back end to accurately calculate request rates per minute.
- Enable threshold-based alerts.
- Enable the use of back ends that require metric start times.
The following is an example of an OpenTelemetry Collector custom resource when using the Metric Start Time Processor:
# ...
  config:
    processors:
      metricstarttime:
        strategy: start_time_metric
        gc_interval: 10m
        start_time_metric_regex:
# ...
where:

strategy
- Defines the strategy for setting start times. Valid values are true_reset_point, subtract_initial_point, and start_time_metric. The default value is true_reset_point.

gc_interval
- The interval at which the processor checks for inactive resources and removes them from the cache to free memory.

start_time_metric_regex
- A regular expression to match metrics that contain the start time. This parameter applies only when strategy: start_time_metric is set. The default value is process_start_time.
The following table describes the available strategies:
| Strategy | Description |
|---|---|
| true_reset_point | Creates a stream starting with a true reset point where the start time is set to the end timestamp. This strategy preserves absolute values and enables correct rate calculations. This is the default strategy. This strategy is stateful and requires using the sidecar Collector mode to ensure that a single Collector instance processes metrics from each application. |
| subtract_initial_point | Drops the first point, subtracts its value from subsequent points, and uses the initial timestamp as the start time. This strategy preserves cumulative semantics and produces correct rates but modifies absolute values. This strategy is stateful and requires using the sidecar Collector mode to ensure that a single Collector instance processes metrics from each application. |
| start_time_metric | Uses the metric that matches the start_time_metric_regex parameter as the source of the start time. |
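The configuration fragment earlier in this section omits the pipeline wiring. A minimal sketch, assuming a prometheus receiver as the source of cumulative points without start times, might look as follows:

```yaml
# ...
  config:
    processors:
      metricstarttime:
        strategy: true_reset_point   # default strategy; stateful, see the table above
        gc_interval: 10m
    service:
      pipelines:
        metrics:
          receivers: [prometheus]    # assumed receiver that produces points without start times
          processors: [metricstarttime]
# ...
```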
Chapter 4. Exporters
4.1. Exporters overview
Exporters send data to one or more back ends or destinations.
An exporter can be push or pull based. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings.
Currently, the following General Availability and Technology Preview exporters are available for the Red Hat build of OpenTelemetry.
4.2. OTLP gRPC Exporter
The OTLP gRPC Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP).
The following is an OpenTelemetry Collector custom resource with the enabled OTLP gRPC Exporter:
# ...
  config:
    exporters:
      otlp:
        endpoint: tempo-ingester:4317
        tls:
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
          insecure: false
          insecure_skip_verify: false
          reload_interval: 1h
          server_name_override: <name>
        headers:
          X-Scope-OrgID: "dev"
    service:
      pipelines:
        traces:
          exporters: [otlp]
        metrics:
          exporters: [otlp]
# ...
where:
endpoint
- The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls section.

tls
- The client-side TLS configuration. Defines paths to TLS certificates.

insecure
- Disables client transport security when set to true. The default value is false.

insecure_skip_verify
- Skips verifying the certificate when set to true. The default value is false.

reload_interval
- The time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns, us (or µs), ms, s, m, and h.

server_name_override
- Overrides the virtual hostname of authority, such as the authority header field in requests. You can use this setting for testing.

headers
- Headers are sent for every request performed during an established connection.
4.3. OTLP HTTP Exporter
The OTLP HTTP Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP).
The following is an OpenTelemetry Collector custom resource with the enabled OTLP Exporter:
# ...
  config:
    exporters:
      otlphttp:
        endpoint: http://tempo-ingester:4318
        tls:
        headers:
          X-Scope-OrgID: "dev"
        disable_keep_alives: false
    service:
      pipelines:
        traces:
          exporters: [otlphttp]
        metrics:
          exporters: [otlphttp]
# ...
where:
endpoint
- The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls section.

tls
- The client-side TLS configuration. Defines paths to TLS certificates.

headers
- Headers are sent in every HTTP request.

disable_keep_alives
- If true, disables HTTP keep-alives. The connection to the server is then used for only a single HTTP request.
4.4. Debug Exporter
The Debug Exporter prints traces and metrics to the standard output.
The following is an OpenTelemetry Collector custom resource with the enabled Debug Exporter:
# ...
  config:
    exporters:
      debug:
        verbosity: detailed
        sampling_initial: 5
        sampling_thereafter: 200
        use_internal_logger: true
    service:
      pipelines:
        traces:
          exporters: [debug]
        metrics:
          exporters: [debug]
# ...
where:
verbosity
- Verbosity of the debug export: detailed, normal, or basic. When set to detailed, pipeline data are verbosely logged. Defaults to normal.

sampling_initial
- Initial number of messages logged per second. The default value is 2 messages per second.

sampling_thereafter
- Sampling rate after the initial number of messages, the value in sampling_initial, has been logged. Disabled by default with the default value of 1. Sampling is enabled with values greater than 1. For more information, see the page for the sampler function in the zapcore package on the Go Project's website.

use_internal_logger
- When set to true, enables output from the Collector's internal logger for the exporter.
4.5. Load Balancing Exporter
The Load Balancing Exporter consistently exports spans, metrics, and logs according to the routing_key configuration.
The Load Balancing Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled Load Balancing Exporter:
# ...
  config:
    exporters:
      loadbalancing:
        routing_key: "service"
        protocol:
          otlp:
            timeout: 1s
        resolver:
          static:
            hostnames:
            - backend-1:4317
            - backend-2:4317
          dns:
            hostname: otelcol-headless.observability.svc.cluster.local
          k8s:
            service: lb-svc.kube-public
            ports:
            - 15317
            - 16317
# ...
where:
routing_key
- When set to service, exports spans for the same service name to the same Collector instance to provide accurate aggregation. The routing_key: traceID setting exports spans based on their trace ID. The implicit default is traceID-based routing.

otlp
- OTLP is the only supported load-balancing protocol. All options of the OTLP Exporter are supported.

resolver
- You can configure only one resolver.

static
- The static resolver distributes the load across the listed endpoints.

dns
- You can use the DNS resolver only with a Kubernetes headless service.

k8s
- The Kubernetes resolver is recommended.
4.6. Prometheus Exporter
The Prometheus Exporter exports metrics in the Prometheus or OpenMetrics formats.
The following is an OpenTelemetry Collector custom resource with the enabled Prometheus Exporter:
# ...
  config:
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
        tls:
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
        namespace: prefix
        const_labels:
          label1: value1
        enable_open_metrics: true
        resource_to_telemetry_conversion:
          enabled: true
        metric_expiration: 180m
        add_metric_suffixes: false
    service:
      pipelines:
        metrics:
          exporters: [prometheus]
# ...
where:
endpoint
- The network endpoint where the metrics are exposed. The Red Hat build of OpenTelemetry Operator automatically exposes the port specified in the endpoint field to the <instance_name>-collector service.

tls
- The server-side TLS configuration. Defines paths to TLS certificates.

namespace
- If set, exports metrics under the provided value.

const_labels
- Key-value pair labels that are applied to every exported metric.

enable_open_metrics
- If true, metrics are exported by using the OpenMetrics format. Exemplars are exported only in the OpenMetrics format and only for histogram and monotonic sum metrics, such as counter. Disabled by default.

resource_to_telemetry_conversion
- If enabled is true, all resource attributes are converted to metric labels. Disabled by default.

metric_expiration
- Defines how long metrics are exposed without updates. The default is 5m.

add_metric_suffixes
- Adds the metric type and unit suffixes. Must be disabled if the Monitor tab in the Jaeger console is enabled. The default is true.
When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true, the Operator automatically creates a Prometheus ServiceMonitor or PodMonitor CR so that Prometheus can scrape your metrics.
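For example, the field path described above sits directly in the spec of the OpenTelemetryCollector CR; this fragment shows only the relevant path:

```yaml
# ...
spec:
  observability:
    metrics:
      enableMetrics: true
# ...
```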
4.7. Prometheus Remote Write Exporter
The Prometheus Remote Write Exporter exports metrics to compatible back ends.
The following is an OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Exporter:
# ...
  config:
    exporters:
      prometheusremotewrite:
        endpoint: "https://my-prometheus:7900/api/v1/push"
        tls:
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
        target_info: true
        export_created_metric: true
        max_batch_size_bytes: 3000000
    service:
      pipelines:
        metrics:
          exporters: [prometheusremotewrite]
# ...
where:
endpoint
- The endpoint for sending the metrics.

tls
- The client-side TLS configuration. Defines paths to TLS certificates.

target_info
- When set to true, creates a target_info metric for each resource metric.

export_created_metric
- When set to true, exports a _created metric for the Summary, Histogram, and Monotonic Sum metric points.

max_batch_size_bytes
- The maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is 3000000, which is approximately 2.861 megabytes.
- This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics.
- You must enable the --web.enable-remote-write-receiver feature flag on the remote Prometheus instance. Without it, pushing metrics to the instance by using this exporter fails.
4.8. Kafka Exporter
The Kafka Exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. You must use it with batch and queued retry processors for higher throughput and resiliency.
The following is an OpenTelemetry Collector custom resource with the enabled Kafka Exporter:
# ...
  config:
    exporters:
      kafka:
        brokers: ["localhost:9092"]
        protocol_version: 2.0.0
        topic: otlp_spans
        logs:
          encoding: raw
        metrics:
          encoding: otlp_proto
        traces:
          encoding: otlp_proto
        auth:
          plain_text:
            username: example
            password: example
          tls:
            ca_file: ca.pem
            cert_file: cert.pem
            key_file: key.pem
            insecure: false
            server_name_override: kafka.example.corp
    service:
      pipelines:
        traces:
          exporters: [kafka]
# ...
where:
brokers
- The list of Kafka brokers. The default is localhost:9092.

protocol_version
- The Kafka protocol version. For example, 2.0.0. This is a required field.

topic
- The name of the Kafka topic to export to. The defaults are otlp_spans for traces, otlp_metrics for metrics, and otlp_logs for logs.

plain_text
- The plain text authentication configuration. If omitted, plain text authentication is disabled.

tls
- The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.

insecure
- Disables verifying the server's certificate chain and hostname. The default is false.

server_name_override
- The name of the server requested by the client to support virtual hosting.
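As noted at the start of this section, the synchronous Kafka producer benefits from batching. The following sketch pairs the exporter with the upstream batch processor; the broker address and protocol version are illustrative:

```yaml
# ...
  config:
    processors:
      batch: {}                      # batches telemetry before the synchronous Kafka producer
    exporters:
      kafka:
        brokers: ["localhost:9092"]  # example broker
        protocol_version: 2.0.0
    service:
      pipelines:
        traces:
          processors: [batch]
          exporters: [kafka]
# ...
```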
4.9. AWS CloudWatch Logs Exporter
The AWS CloudWatch Logs Exporter sends logs data to the Amazon CloudWatch Logs service and signs requests by using the AWS SDK for Go and the default credential provider chain.
The AWS CloudWatch Logs Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled AWS CloudWatch Logs Exporter:
# ...
  config:
    exporters:
      awscloudwatchlogs:
        log_group_name: "<group_name_of_amazon_cloudwatch_logs>"
        log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>"
        region: <aws_region_of_log_stream>
        endpoint: <protocol><service_endpoint_of_amazon_cloudwatch_logs>
        log_retention: <supported_value_in_days>
        role_arn: "<iam_role>"
# ...
where:
log_group_name
- Required. If the log group does not exist yet, it is automatically created.

log_stream_name
- Required. If the log stream does not exist yet, it is automatically created.

region
- Optional. If the AWS region is not already set in the default credential chain, you must specify it.

endpoint
- Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference).

log_retention
- Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0, the logs never expire. Supported values for retention in days are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 2192, 2557, 2922, 3288, or 3653.

role_arn
- Optional. The AWS Identity and Access Management (IAM) role for uploading the log segments to a different account.
4.10. AWS EMF Exporter
The AWS EMF Exporter converts the following OpenTelemetry metrics datapoints to the AWS CloudWatch Embedded Metric Format (EMF):
- Int64DataPoints
- DoubleDataPoints
- SummaryDataPoints
The EMF metrics are then sent directly to the Amazon CloudWatch Logs service by using the PutLogEvents API.
One of the benefits of using this exporter is that you can view the logs and metrics in the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
The AWS EMF Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled AWS EMF Exporter:
# ...
  config:
    exporters:
      awsemf:
        log_group_name: "<group_name_of_amazon_cloudwatch_logs>"
        log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>"
        resource_to_telemetry_conversion:
          enabled: true
        region: <region>
        endpoint: <protocol><endpoint>
        log_retention: <supported_value_in_days>
        namespace: <custom_namespace>
        role_arn: "<iam_role>"
# ...
where:
log_group_name
- You can use the log_group_name parameter to customize the log group name, set the default /metrics/default value, or use the following placeholders:
  - The /aws/metrics/{ClusterName} placeholder is for searching for the ClusterName or aws.ecs.cluster.name resource attribute in the metrics data and replacing it with the actual cluster name.
  - The {NodeName} placeholder is for searching for the NodeName or k8s.node.name resource attribute.
  - The {TaskId} placeholder is for searching for the TaskId or aws.ecs.task.id resource attribute.
  - If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value.

log_stream_name
- You can use the log_stream_name parameter to customize the log stream name, set the default otel-stream value, or use the following placeholders:
  - The {ClusterName} placeholder is for searching for the ClusterName or aws.ecs.cluster.name resource attribute.
  - The {ContainerInstanceId} placeholder is for searching for the ContainerInstanceId or aws.ecs.container.instance.id resource attribute. This resource attribute is valid only for the AWS ECS EC2 launch type.
  - The {NodeName} placeholder is for searching for the NodeName or k8s.node.name resource attribute.
  - The {TaskDefinitionFamily} placeholder is for searching for the TaskDefinitionFamily or aws.ecs.task.family resource attribute.
  - The {TaskId} placeholder is for searching for the TaskId or aws.ecs.task.id resource attribute in the metrics data and replacing it with the actual task ID.
  - If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value.

resource_to_telemetry_conversion
- Optional. Converts resource attributes to telemetry attributes such as metric labels. Disabled by default.

region
- The AWS region of the log stream. If a region is not already set in the default credential provider chain, you must specify the region.

endpoint
- Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference).

log_retention
- Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0, the logs never expire. Supported values for retention in days are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 2192, 2557, 2922, 3288, or 3653.

namespace
- Optional. A custom namespace for the Amazon CloudWatch metrics.

role_arn
- Optional. The AWS Identity and Access Management (IAM) role for uploading the metric segments to a different account.
4.11. AWS X-Ray Exporter
The AWS X-Ray Exporter converts OpenTelemetry spans to AWS X-Ray Segment Documents and then sends them directly to the AWS X-Ray service. The AWS X-Ray Exporter uses the PutTraceSegments API and signs requests by using the AWS SDK for Go and the default credential provider chain.
The AWS X-Ray Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled AWS X-Ray Exporter:
# ...
  config:
    exporters:
      awsxray:
        region: "<region>"
        endpoint: <protocol><endpoint>
        resource_arn: "<aws_resource_arn>"
        role_arn: "<iam_role>"
        indexed_attributes: [ "<indexed_attr_0>", "<indexed_attr_1>" ]
        aws_log_groups: ["<group1>", "<group2>"]
        request_timeout_seconds: 120
# ...
where:
region
- The destination region for the X-Ray segments sent to the AWS X-Ray service. For example, eu-west-1.

endpoint
- Optional. You can override the default AWS X-Ray service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see AWS X-Ray endpoints and quotas (AWS General Reference).

resource_arn
- The Amazon Resource Name (ARN) of the AWS resource that is running the Collector.

role_arn
- The AWS Identity and Access Management (IAM) role for uploading the X-Ray segments to a different account.

indexed_attributes
- The list of attribute names to be converted to X-Ray annotations.

aws_log_groups
- The list of log group names for Amazon CloudWatch Logs.

request_timeout_seconds
- The time duration in seconds before timing out a request. If omitted, the default value is 30.
4.12. File Exporter
The File Exporter writes telemetry data to files in persistent storage and supports file operations such as rotation, compression, and writing to multiple files.
The File Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
With this exporter, you can also use a resource attribute to control file naming.
The only required setting is path, which specifies the destination path for telemetry files in the persistent-volume file system.
The following is an OpenTelemetry Collector custom resource with the enabled File Exporter:
# ...
  config:
    exporters:
      file:
        path: /data/metrics.json
        rotation:
          max_megabytes: 10
          max_days: 3
          max_backups: 3
          localtime: true
        format: proto
        compression: zstd
        flush_interval: 5
# ...
where:
path- File-system path where the data is to be written. There is no default.
rotation- File rotation is an optional feature of this exporter. By default, telemetry data is exported to a single file. Add the rotation setting to enable file rotation.
max_megabytes- Maximum size a file is allowed to reach until it is rotated. The default is 100.
max_days- How many days a file is to be retained, counting from the timestamp in the file name. There is no default.
max_backups- Maximum number of older files to retain. The default is 100.
localtime- Local-time format for the timestamp, which is appended to the file name in front of any extension, when the file is rotated. The default is Coordinated Universal Time (UTC).
format- Format for encoding the telemetry data before writing it to a file. The default format is json. The proto format is also supported.
compression- Optional file compression setting that defines the compression algorithm for the data that is exported to a file. Currently, only the zstd compression algorithm is supported. There is no default.
flush_interval- Time interval between flushes. A value without a unit is set in nanoseconds. This setting is ignored when file rotation is enabled through the rotation settings.
4.13. Google Cloud Exporter
The Google Cloud Exporter sends telemetry data to Google Cloud Operations Suite. Using the Google Cloud Exporter, you can export metrics to Google Cloud Monitoring, logs to Google Cloud Logging, and traces to Google Cloud Trace.
The Google Cloud Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the enabled Google Cloud Exporter:
# ...
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/key.json
volumeMounts:
- name: google-application-credentials
mountPath: /var/secrets/google
readOnly: true
volumes:
- name: google-application-credentials
secret:
secretName: google-application-credentials
config:
exporters:
googlecloud:
project:
# ...
where:
value- GOOGLE_APPLICATION_CREDENTIALS environment variable that points to the authentication key.json file. The key.json file is mounted as a secret volume to the OpenTelemetry Collector.
project- Optional. The project identifier. If not specified, the project is automatically determined from the credentials.
By default, the exporter sends telemetry data to the project specified in the project field of the exporter’s configuration. You can set up an override on a per-metric basis by using the gcp.project.id resource attribute. For example, if a metric has a label project, you can use the Group-by-Attributes Processor to promote it to a resource label, and then use the Resource Processor to rename the attribute from project to gcp.project.id.
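The promote-and-rename step can be sketched in the Collector configuration. The following is a minimal, hedged sketch: the metric label name project, the pipeline components, and the use of an upsert followed by a delete to perform the rename are assumptions for illustration.

```yaml
# ...
config:
  processors:
    # Promote the data-point attribute "project" to a resource attribute.
    groupbyattrs:
      keys:
        - project
    # Copy the promoted attribute to gcp.project.id, then drop the original.
    resource:
      attributes:
        - key: gcp.project.id
          from_attribute: project
          action: upsert
        - key: project
          action: delete
  service:
    pipelines:
      metrics:
        receivers: [otlp]
        processors: [groupbyattrs, resource]
        exporters: [googlecloud]
# ...
```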
Chapter 5. Connectors
5.1. Connectors overview
A connector connects two pipelines.
It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.
Currently, the following General Availability and Technology Preview connectors are available for the Red Hat build of OpenTelemetry.
5.2. Count Connector
The Count Connector counts trace spans, trace span events, metrics, metric data points, and log records in exporter pipelines.
The Count Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following are the default metric names:
- trace.span.count
- trace.span.event.count
- metric.count
- metric.datapoint.count
- log.record.count
You can also expose custom metric names.
The following is an OpenTelemetry Collector custom resource (CR) with an enabled Count Connector:
# ...
config:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
exporters:
prometheus:
endpoint: 0.0.0.0:8889
connectors:
count: {}
service:
pipelines:
traces/in:
receivers: [otlp]
exporters: [count]
metrics/out:
receivers: [count]
exporters: [prometheus]
# ...
where:
pipelines- Configures the Count Connector as an exporter or receiver in the pipeline and exports the generated metrics to the correct exporter.
exporters- Configures the Count Connector to receive spans as an exporter.
receivers- Configures the Count Connector to emit generated metrics as a receiver.
If the Count Connector is not generating the expected metrics, check whether the OpenTelemetry Collector is receiving the expected spans, metrics, and logs, and whether the telemetry data flows through the Count Connector as expected. You can also use the Debug Exporter to inspect the incoming telemetry data.
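As a sketch of that troubleshooting step, you can fan the same pipeline out to both the connector and the Debug Exporter. The pipeline names match the preceding example; the verbosity level is an assumption.

```yaml
# ...
config:
  exporters:
    debug:
      verbosity: detailed # print each received item to the Collector log
  service:
    pipelines:
      traces/in:
        receivers: [otlp]
        # send spans to the Count Connector and the Debug Exporter in parallel
        exporters: [count, debug]
# ...
```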
The Count Connector can count telemetry data according to defined conditions and expose those data as metrics when configured by using such fields as spans, spanevents, metrics, datapoints, or logs. See the next example.
The following is an example OpenTelemetry Collector CR for the Count Connector to count spans by conditions:
# ...
config:
connectors:
count:
spans:
<custom_metric_name>:
description: "<custom_metric_description>"
conditions:
- 'attributes["env"] == "dev"'
- 'name == "devevent"'
# ...
where:
spans- In this example, the exposed metric counts spans with the specified conditions.
<custom_metric_name>- You can specify a custom metric name such as cluster.prod.event.count.
Write conditions correctly and follow the required syntax for attribute matching or telemetry field conditions. Improperly defined conditions are the most likely sources of errors.
The Count Connector can count telemetry data according to defined attributes when configured by using such fields as spans, spanevents, metrics, datapoints, or logs. See the next example. The attribute keys are injected into the telemetry data. You must define a value for the default_value field for missing attributes.
The following is an example OpenTelemetry Collector CR for the Count Connector to count logs by attributes:
# ...
config:
connectors:
count:
logs:
<custom_metric_name>:
description: "<custom_metric_description>"
attributes:
- key: env
default_value: unknown
# ...
where:
logs- Specifies attributes for logs.
<custom_metric_name>- You can specify a custom metric name such as my.log.count.
default_value- Defines a default value when the attribute is not set.
5.3. Routing Connector
The Routing Connector routes logs, metrics, and traces to specified pipelines according to resource attributes and their routing conditions, which are written as OpenTelemetry Transformation Language (OTTL) statements.
The Routing Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with an enabled Routing Connector:
# ...
config:
connectors:
routing:
table:
- statement: route() where attributes["X-Tenant"] == "dev"
pipelines: [traces/dev]
- statement: route() where attributes["X-Tenant"] == "prod"
pipelines: [traces/prod]
default_pipelines: [traces/dev]
error_mode: ignore
match_once: false
service:
pipelines:
traces/in:
receivers: [otlp]
exporters: [routing]
traces/dev:
receivers: [routing]
exporters: [otlp/dev]
traces/prod:
receivers: [routing]
exporters: [otlp/prod]
# ...
where:
table- Connector routing table.
statement- Routing conditions written as OTTL statements.
pipelines- Destination pipelines for routing the matching telemetry data.
default_pipelines- Destination pipelines for routing the telemetry data for which no routing condition is satisfied.
error_mode- Error-handling mode. The propagate value logs an error and drops the payload. The ignore value ignores the condition and attempts to match with the next one. The silent value is the same as ignore but without logging the error. The default is propagate.
match_once- When set to true, the payload is routed only to the first pipeline whose routing condition is met. The default is false.
5.4. Forward Connector
The Forward Connector merges two pipelines of the same type.
The Forward Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with an enabled Forward Connector:
# ...
config:
receivers:
otlp:
protocols:
grpc:
jaeger:
protocols:
grpc:
processors:
batch:
exporters:
otlp/traces:
endpoint: tempo-simplest-distributor:4317
tls:
insecure: true
connectors:
forward: {}
service:
pipelines:
traces/regiona:
receivers: [otlp]
processors: []
exporters: [forward]
traces/regionb:
receivers: [jaeger]
processors: []
exporters: [forward]
traces:
receivers: [forward]
processors: [batch]
exporters: [otlp/traces]
# ...
5.5. Span Metrics Connector
The Span Metrics Connector aggregates request, error, and duration (RED) OpenTelemetry metrics from span data.
The following is an OpenTelemetry Collector custom resource with an enabled Span Metrics Connector:
# ...
config:
connectors:
spanmetrics:
metrics_flush_interval: 15s
service:
pipelines:
traces:
exporters: [spanmetrics]
metrics:
receivers: [spanmetrics]
# ...
where:
metrics_flush_interval- Defines the flush interval of the generated metrics. The default is 15s.
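The preceding example shows only the connector itself. A fuller sketch with the surrounding components might look as follows; the OTLP receiver and Prometheus exporter are assumptions for illustration.

```yaml
# ...
config:
  receivers:
    otlp:
      protocols:
        grpc: {}
  exporters:
    prometheus:
      endpoint: 0.0.0.0:8889
  connectors:
    spanmetrics:
      metrics_flush_interval: 15s
  service:
    pipelines:
      traces:
        receivers: [otlp]
        exporters: [spanmetrics] # consumes spans at the end of the traces pipeline
      metrics:
        receivers: [spanmetrics] # emits RED metrics into the metrics pipeline
        exporters: [prometheus]
# ...
```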
Chapter 6. Extensions
Extensions add capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically. Currently, the following General Availability and Technology Preview extensions are available for the Red Hat build of OpenTelemetry.
6.1. BearerTokenAuth Extension
The BearerTokenAuth Extension is an authenticator for receivers and exporters that are based on the HTTP and the gRPC protocol. You can use the OpenTelemetry Collector custom resource to configure client authentication and server authentication for the BearerTokenAuth Extension on the receiver and exporter side. This extension supports traces, metrics, and logs.
The following is an OpenTelemetry Collector custom resource with client and server authentication configured for the BearerTokenAuth Extension:
# ...
config:
extensions:
bearertokenauth:
scheme: "Bearer"
token: "<token>"
filename: "<token_file>"
receivers:
otlp:
protocols:
http:
auth:
authenticator: bearertokenauth
exporters:
otlp:
auth:
authenticator: bearertokenauth
service:
extensions: [bearertokenauth]
pipelines:
traces:
receivers: [otlp]
exporters: [otlp]
# ...
where:
scheme- You can configure the BearerTokenAuth Extension to send a custom scheme. The default is Bearer.
token- You can add the BearerTokenAuth Extension token as metadata to identify a message.
filename- Path to a file that contains an authorization token that is transmitted with every message.
http.auth.authenticator- You can assign the authenticator configuration to an OTLP Receiver.
otlp.auth.authenticator- You can assign the authenticator configuration to an OTLP Exporter.
6.2. OAuth2Client Extension
The OAuth2Client Extension is an authenticator for exporters that are based on the HTTP and the gRPC protocol. Client authentication for the OAuth2Client Extension is configured in a separate section in the OpenTelemetry Collector custom resource. This extension supports traces, metrics, and logs.
The OAuth2Client Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with client authentication configured for the OAuth2Client Extension:
# ...
config:
extensions:
oauth2client:
client_id: <client_id>
client_secret: <client_secret>
endpoint_params:
audience: <audience>
token_url: https://example.com/oauth2/default/v1/token
scopes: ["api.metrics"]
# tls settings for the token client
tls:
insecure: true
ca_file: /var/lib/mycert.pem
cert_file: <cert_file>
key_file: <key_file>
timeout: 2s
receivers:
otlp:
protocols:
http: {}
exporters:
otlp:
auth:
authenticator: oauth2client
service:
extensions: [oauth2client]
pipelines:
traces:
receivers: [otlp]
exporters: [otlp]
# ...
where:
client_id- Client identifier, which is provided by the identity provider.
client_secret- Confidential key used to authenticate the client to the identity provider.
endpoint_params- Further metadata, in the key-value pair format, which is transferred during authentication. For example, audience specifies the intended audience for the access token, indicating the recipient of the token.
token_url- URL of the OAuth2 token endpoint, where the Collector requests access tokens.
scopes- Scopes define the specific permissions or access levels requested by the client.
tls- Transport Layer Security (TLS) settings for the token client, which is used to establish a secure connection when requesting tokens.
insecure- When set to true, configures the Collector to use an insecure or non-verified TLS connection to call the configured token endpoint.
ca_file- Path to a Certificate Authority (CA) file that is used to verify the server’s certificate during the TLS handshake.
cert_file- Path to the client certificate file that the client must use to authenticate itself to the OAuth2 server if required.
key_file- Path to the client’s private key file that is used with the client certificate if needed for authentication.
timeout- Sets a timeout for the token client’s request.
authenticator- You can assign the authenticator configuration to an OTLP exporter.
6.3. File Storage Extension
The File Storage Extension supports traces, metrics, and logs, and can persist state to the local file system. It persists the sending queue for OpenTelemetry Protocol (OTLP) exporters that are based on the HTTP and gRPC protocols.
The File Storage Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
This extension requires read and write access to a directory. It can use a default directory, but the default directory must already exist.
The following is an OpenTelemetry Collector custom resource with a configured File Storage Extension that persists an OTLP sending queue:
# ...
config:
extensions:
file_storage/all_settings:
directory: /var/lib/otelcol/mydir
timeout: 1s
compaction:
on_start: true
directory: /tmp/
max_transaction_size: 65_536
fsync: false
exporters:
otlp:
sending_queue:
storage: file_storage/all_settings
service:
extensions: [file_storage/all_settings]
pipelines:
traces:
receivers: [otlp]
exporters: [otlp]
# ...
where:
file_storage/all_settings.directory- The directory in which the telemetry data is stored.
timeout- The timeout interval for opening the stored files.
on_start- Starts compaction when the Collector starts. If omitted, the default is false.
file_storage/all_settings.compaction.directory- The directory in which the compactor stores the telemetry data.
max_transaction_size- Defines the maximum size of the compaction transaction. To ignore the transaction size, set it to zero. If omitted, the default is 65536 bytes.
fsync- When set to true, forces the database to perform an fsync call after each write operation. This helps to ensure database integrity if the database process is interrupted, but at the cost of performance.
storage- Buffers the OTLP Exporter data on the local file system.
extensions- Enables the File Storage Extension in the Collector.
6.4. OIDC Auth Extension
The OIDC Auth Extension authenticates incoming requests to receivers by using the OpenID Connect (OIDC) protocol. It validates the ID token in the authorization header against the issuer and updates the authentication context of the incoming request.
The OIDC Auth Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the configured OIDC Auth Extension:
# ...
config:
extensions:
oidc:
attribute: authorization
issuer_url: https://example.com/auth/realms/opentelemetry
issuer_ca_path: /var/run/tls/issuer.pem
audience: otel-collector
username_claim: email
receivers:
otlp:
protocols:
grpc:
auth:
authenticator: oidc
exporters:
debug: {}
service:
extensions: [oidc]
pipelines:
traces:
receivers: [otlp]
exporters: [debug]
# ...
where:
attribute- Name of the header that contains the ID token. The default name is authorization.
issuer_url- Base URL of the OIDC provider.
issuer_ca_path- Optional: The path to the issuer’s CA certificate.
audience- Audience for the token.
username_claim- Name of the claim that contains the username. The default name is sub.
6.5. Jaeger Remote Sampling Extension
The Jaeger Remote Sampling Extension serves sampling strategies that follow Jaeger’s remote sampling API. You can configure this extension to proxy requests to a backing remote sampling server, such as a Jaeger collector down the pipeline, or to serve strategies from a static JSON file on the local file system.
The Jaeger Remote Sampling Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with a configured Jaeger Remote Sampling Extension:
# ...
config:
extensions:
jaegerremotesampling:
source:
reload_interval: 30s
remote:
endpoint: jaeger-collector:14250
file: /etc/otelcol/sampling_strategies.json
receivers:
otlp:
protocols:
http: {}
exporters:
debug: {}
service:
extensions: [jaegerremotesampling]
pipelines:
traces:
receivers: [otlp]
exporters: [debug]
# ...
where:
reload_interval- Time interval at which the sampling configuration is updated.
endpoint- Endpoint for reaching the Jaeger remote sampling strategy provider.
file- Path to a local file that contains a sampling strategy configuration in the JSON format.
The following is an example of a Jaeger Remote Sampling strategy file:
{
"service_strategies": [
{
"service": "foo",
"type": "probabilistic",
"param": 0.8,
"operation_strategies": [
{
"operation": "op1",
"type": "probabilistic",
"param": 0.2
},
{
"operation": "op2",
"type": "probabilistic",
"param": 0.4
}
]
},
{
"service": "bar",
"type": "ratelimiting",
"param": 5
}
],
"default_strategy": {
"type": "probabilistic",
"param": 0.5,
"operation_strategies": [
{
"operation": "/health",
"type": "probabilistic",
"param": 0.0
},
{
"operation": "/metrics",
"type": "probabilistic",
"param": 0.0
}
]
}
}
6.6. Performance Profiler Extension
The Performance Profiler Extension enables the Go net/http/pprof endpoint. Developers use this extension to collect performance profiles and investigate issues with the service.
The Performance Profiler Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the configured Performance Profiler Extension:
# ...
config:
extensions:
pprof:
endpoint: localhost:1777
block_profile_fraction: 0
mutex_profile_fraction: 0
save_to_file: test.pprof
receivers:
otlp:
protocols:
http: {}
exporters:
debug: {}
service:
extensions: [pprof]
pipelines:
traces:
receivers: [otlp]
exporters: [debug]
# ...
where:
endpoint- Endpoint at which this extension listens. Use localhost: to make it available only locally, or ":" to make it available on all network interfaces. The default value is localhost:1777.
block_profile_fraction- Sets the fraction of blocking events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0.
mutex_profile_fraction- Sets the fraction of mutex contention events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0.
save_to_file- Name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts, and the profile is saved to the file when the Collector is terminated.
6.7. Google Client Authorization Extension
The Google Client Authorization Extension provides Google OAuth 2.0 client credentials and metadata for gRPC and HTTP exporters.
The following is an OpenTelemetry Collector custom resource with the Google Client Authorization Extension:
# ...
config:
extensions:
googleclientauth:
project: "<google_cloud_project>"
exporters:
otlphttp:
encoding: json
endpoint: https://telemetry.googleapis.com
auth:
authenticator: googleclientauth
service:
extensions: [googleclientauth]
pipelines:
traces:
receivers: [otlp]
      exporters: [otlphttp]
# ...
where:
project- Google Cloud project ID. The extension sends telemetry to the Google Cloud project that you specify by using this field, which is an alternative to using the gcp.project.id resource attribute. If you do not specify a Google Cloud project by using this field, the extension gets it by using the application default credentials.
6.8. Health Check Extension
The Health Check Extension provides an HTTP URL for checking the status of the OpenTelemetry Collector. You can use this extension as a liveness and readiness probe on OpenShift.
The Health Check Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following is an OpenTelemetry Collector custom resource with the configured Health Check Extension:
# ...
config:
extensions:
health_check:
endpoint: "0.0.0.0:13133"
tls:
ca_file: "/path/to/ca.crt"
cert_file: "/path/to/cert.crt"
key_file: "/path/to/key.key"
path: "/health/status"
check_collector_pipeline:
enabled: true
interval: "5m"
exporter_failure_threshold: 5
receivers:
otlp:
protocols:
http: {}
exporters:
debug: {}
service:
extensions: [health_check]
pipelines:
traces:
receivers: [otlp]
exporters: [debug]
# ...
where:
endpoint- Target IP address for publishing the health check status. The default is 0.0.0.0:13133.
tls- TLS server-side configuration. Defines paths to TLS certificates. If omitted, TLS is disabled.
path- Path for the health check server. The default is /.
check_collector_pipeline- Settings for the Collector pipeline health check.
enabled- Enables the Collector pipeline health check. The default is false.
interval- Time interval for checking the number of failures. The default is 5m.
exporter_failure_threshold- Number of failures up to which a container is still marked as healthy. The default is 5.
6.9. zPages Extension
The zPages Extension provides an HTTP endpoint that serves live data for debugging instrumented components in real time.
The zPages Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can use this extension for in-process diagnostics and insights into traces and metrics without relying on an external backend. With this extension, you can monitor and troubleshoot the behavior of the OpenTelemetry Collector and related components by watching the diagnostic information at the provided endpoint.
The following is an OpenTelemetry Collector custom resource with the configured zPages Extension:
# ...
config:
extensions:
zpages:
endpoint: "localhost:55679"
receivers:
otlp:
protocols:
http: {}
exporters:
debug: {}
service:
extensions: [zpages]
pipelines:
traces:
receivers: [otlp]
exporters: [debug]
# ...
where:
endpoint- The HTTP endpoint for serving the zPages extension. The default is localhost:55679.
Accessing the HTTP endpoint requires port-forwarding because the Red Hat build of OpenTelemetry Operator does not expose this route.
The CLI command for enabling port-forwarding is as follows:
$ oc port-forward pod/$(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679
The Collector provides the following zPages for diagnostics:
- ServiceZ
- Shows an overview of the Collector services and links to the following zPages: PipelineZ, ExtensionZ, and FeatureZ. This page also displays information about the build version and runtime. An example of this page’s URL is http://localhost:55679/debug/servicez.
- PipelineZ
- Shows detailed information about the active pipelines in the Collector. This page displays the pipeline type, whether data are modified, and the associated receivers, processors, and exporters for each pipeline. An example of this page’s URL is http://localhost:55679/debug/pipelinez.
- ExtensionZ
- Shows the currently active extensions in the Collector. An example of this page’s URL is http://localhost:55679/debug/extensionz.
- FeatureZ
- Shows the feature gates enabled in the Collector along with their status and description. An example of this page’s URL is http://localhost:55679/debug/featurez.
- TraceZ
- Shows spans categorized by latency. Available time ranges include 0 µs, 10 µs, 100 µs, 1 ms, 10 ms, 100 ms, 1 s, 10 s, and 1 m. This page also allows for quick inspection of error samples. An example of this page’s URL is http://localhost:55679/debug/tracez.