Chapter 4. Exporters
Exporters send data to one or more back ends or destinations. An exporter can be push based or pull based. By default, no exporters are configured, so you must configure at least one. Exporters can support one or more data sources. You can use some exporters with their default settings, but many exporters require configuration to specify at least the destination and security settings.
Currently, the following General Availability and Technology Preview exporters are available for the Red Hat build of OpenTelemetry:
4.1. OTLP Exporter
The OTLP gRPC Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP).
OpenTelemetry Collector custom resource with the enabled OTLP Exporter
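The following is a minimal sketch of such a custom resource. The instance name, endpoint, certificate paths, and header values are placeholders; the numbered comments correspond to the callouts that follow.

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    exporters:
      otlp:
        endpoint: otel-collector-headless.tracing-system.svc:4317 # 1
        tls: # 2
          ca_file: ca.pem
          cert_file: cert.pem
          key_file: key.pem
          insecure: false # 3
          insecure_skip_verify: false # 4
          reload_interval: 1h # 5
          server_name_override: <name> # 6
        headers: # 7
          X-Scope-OrgID: "dev"
    service:
      pipelines:
        traces:
          exporters: [otlp]
```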
1. The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls configuration.
2. The client-side TLS configuration. Defines paths to TLS certificates.
3. Disables client transport security when set to true. The default value is false.
4. Skips verifying the certificate when set to true. The default value is false.
5. Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns, us (or µs), ms, s, m, and h.
6. Overrides the virtual hostname of authority, such as the authority header field in requests. You can use this for testing.
7. Headers are sent for every request performed during an established connection.
4.2. OTLP HTTP Exporter
The OTLP HTTP Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP).
OpenTelemetry Collector custom resource with the enabled OTLP HTTP Exporter
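For brevity, only the exporters fragment of the Collector configuration is sketched here; the endpoint and header values are placeholders, and the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  otlphttp:
    endpoint: http://tempo-ingester:4318 # 1
    tls: # 2
      ca_file: ca.pem
      cert_file: cert.pem
      key_file: key.pem
    headers: # 3
      X-Scope-OrgID: "dev"
    disable_keep_alives: false # 4
```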
1. The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls configuration.
2. The client-side TLS configuration. Defines paths to TLS certificates.
3. Headers are sent in every HTTP request.
4. If true, disables HTTP keep-alives. Each connection to the server is used for a single HTTP request only.
4.3. Debug Exporter
The Debug Exporter prints traces and metrics to the standard output.
OpenTelemetry Collector custom resource with the enabled Debug Exporter
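A minimal sketch of the exporters fragment, assuming illustrative sampling values; the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  debug:
    verbosity: detailed # 1
    sampling_initial: 5 # 2
    sampling_thereafter: 200 # 3
    use_internal_logger: true # 4
```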
1. Verbosity of the debug export: detailed, normal, or basic. When set to detailed, pipeline data is verbosely logged. Defaults to normal.
2. Initial number of messages logged per second. The default value is 2 messages per second.
3. Sampling rate after the initial number of messages, the value in sampling_initial, has been logged. Disabled by default with the default value of 1. Sampling is enabled with values greater than 1. For more information, see the page for the sampler function in the zapcore package on the Go Project's website.
4. When set to true, enables output from the Collector's internal logger for the exporter.
4.4. Load Balancing Exporter
The Load Balancing Exporter consistently exports spans, metrics, and logs according to the routing_key configuration.
The Load Balancing Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Load Balancing Exporter
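A sketch of the exporters fragment with placeholder backend hostnames; all three resolvers are shown for illustration, but you configure only one of them. The numbered comments correspond to the callouts that follow.

```yaml
exporters:
  loadbalancing:
    routing_key: "service" # 1
    protocol: # 2
      otlp:
        timeout: 1s
    resolver: # 3
      # Configure only one of the following resolvers:
      static: # 4
        hostnames:
        - backend-1:4317
        - backend-2:4317
      dns: # 5
        hostname: otelcol-headless.observability.svc.cluster.local
      k8s: # 6
        service: lb-svc.observability
```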
1. The routing_key: service setting exports spans for the same service name to the same Collector instance to provide accurate aggregation. The routing_key: traceID setting exports spans based on their traceID. The implicit default is traceID-based routing.
2. OTLP is the only supported load-balancing protocol. All options of the OTLP Exporter are supported.
3. You can configure only one resolver.
4. The static resolver distributes the load across the listed endpoints.
5. You can use the DNS resolver only with a Kubernetes headless service.
6. The Kubernetes resolver is recommended.
4.5. Prometheus Exporter
The Prometheus Exporter exports metrics in the Prometheus or OpenMetrics formats.
The Prometheus Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Prometheus Exporter
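A sketch of the exporters fragment; the endpoint, certificate paths, namespace, and label values are placeholders, and the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889 # 1
    tls: # 2
      ca_file: ca.pem
      cert_file: cert.pem
      key_file: key.pem
    namespace: prefix # 3
    const_labels: # 4
      label1: value1
    enable_open_metrics: true # 5
    resource_to_telemetry_conversion: # 6
      enabled: true
    metric_expiration: 180m # 7
    add_metric_suffixes: false # 8
```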
1. The network endpoint where the metrics are exposed. The Red Hat build of OpenTelemetry Operator automatically exposes the port specified in the endpoint field to the <instance_name>-collector service.
2. The server-side TLS configuration. Defines paths to TLS certificates.
3. If set, metrics are exported under the provided namespace prefix.
4. Key-value pair labels that are applied to every exported metric.
5. If true, metrics are exported by using the OpenMetrics format. Exemplars are exported only in the OpenMetrics format, and only for histogram and monotonic sum metrics such as counter. Disabled by default.
6. If enabled is true, all resource attributes are converted to metric labels. Disabled by default.
7. Defines how long metrics are exposed without updates. The default is 5m.
8. Adds the metric type and unit suffixes. Must be disabled if the monitor tab in the Jaeger console is enabled. The default is true.
When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true, the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics.
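For example, the following fragment of the custom resource enables the automatic creation of the monitoring resources (the instance name otel is a placeholder):

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  observability:
    metrics:
      enableMetrics: true
```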
4.6. Prometheus Remote Write Exporter
The Prometheus Remote Write Exporter exports metrics to compatible back ends.
The Prometheus Remote Write Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Exporter
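A sketch of the exporters fragment; the endpoint URL and certificate paths are placeholders, and the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  prometheusremotewrite:
    endpoint: "https://prometheus.example.com/api/v1/write" # 1
    tls: # 2
      ca_file: ca.pem
      cert_file: cert.pem
      key_file: key.pem
    target_info: true # 3
    export_created_metric: true # 4
    max_batch_size_bytes: 3000000 # 5
```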
1. The endpoint for sending the metrics.
2. The server-side TLS configuration. Defines paths to TLS certificates.
3. When set to true, creates a target_info metric for each resource metric.
4. When set to true, exports a _created metric for the Summary, Histogram, and Monotonic Sum metric points.
5. The maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is 3000000, which is approximately 2.861 megabytes.
- This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics.
- You must enable the --web.enable-remote-write-receiver feature flag on the remote Prometheus instance. Without it, pushing the metrics to the instance by using this exporter fails.
4.7. Kafka Exporter
The Kafka Exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. You must use it with batch and queued retry processors for higher throughput and resiliency.
OpenTelemetry Collector custom resource with the enabled Kafka Exporter
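A sketch of the exporters fragment; the broker addresses, credentials, certificate paths, and server name are placeholders, and the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  kafka:
    brokers: ["broker1:9092", "broker2:9092"] # 1
    protocol_version: 2.0.0 # 2
    topic: otlp_spans # 3
    auth:
      plain_text: # 4
        username: example
        password: example
      tls: # 5
        ca_file: ca.pem
        cert_file: cert.pem
        key_file: key.pem
        insecure: false # 6
        server_name_override: kafka.example.corp # 7
```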
1. The list of Kafka brokers. The default is localhost:9092.
2. The Kafka protocol version. For example, 2.0.0. This is a required field.
3. The name of the Kafka topic to export to. The following are the defaults: otlp_spans for traces, otlp_metrics for metrics, and otlp_logs for logs.
4. The plain-text authentication configuration. If omitted, plain-text authentication is disabled.
5. The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
6. Disables verifying the server's certificate chain and host name. The default is false.
7. ServerName indicates the name of the server requested by the client to support virtual hosting.
4.8. AWS CloudWatch Logs Exporter
The AWS CloudWatch Logs Exporter sends logs data to the Amazon CloudWatch Logs service and signs requests by using the AWS SDK for Go and the default credential provider chain.
The AWS CloudWatch Logs Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled AWS CloudWatch Logs Exporter
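A sketch of the exporters fragment, with angle-bracket placeholders standing in for environment-specific values; the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  awscloudwatchlogs:
    log_group_name: "<group_name>" # 1
    log_stream_name: "<log_stream_name>" # 2
    region: <aws_region> # 3
    endpoint: <service_endpoint> # 4
    log_retention: <days> # 5
    role_arn: <role_arn> # 6
```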
1. Required. If the log group does not exist yet, it is automatically created.
2. Required. If the log stream does not exist yet, it is automatically created.
3. Optional. If the AWS region is not already set in the default credential chain, you must specify it.
4. Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference).
5. Optional. Sets the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0, the logs never expire. Supported values for retention in days are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 2192, 2557, 2922, 3288, or 3653.
6. Optional. The AWS Identity and Access Management (IAM) role for uploading the log segments to a different account.
4.9. AWS EMF Exporter
The AWS EMF Exporter converts the following OpenTelemetry metrics datapoints to the AWS CloudWatch Embedded Metric Format (EMF):
- Int64DataPoints
- DoubleDataPoints
- SummaryDataPoints
The EMF metrics are then sent directly to the Amazon CloudWatch Logs service by using the PutLogEvents API.
One of the benefits of using this exporter is the possibility to view logs and metrics in the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
The AWS EMF Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled AWS EMF Exporter
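A sketch of the exporters fragment, with angle-bracket placeholders standing in for environment-specific values; the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  awsemf:
    log_group_name: "/aws/metrics/{ClusterName}" # 1
    log_stream_name: "{NodeName}" # 2
    resource_to_telemetry_conversion: # 3
      enabled: true
    region: <region> # 4
    endpoint: <service_endpoint> # 5
    log_retention: <days> # 6
    namespace: <custom_namespace> # 7
    role_arn: <role_arn> # 8
```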
1. You can use the log_group_name parameter to customize the log group name, set the default /metrics/default value, or use the following placeholders:
   - The /aws/metrics/{ClusterName} placeholder searches for the ClusterName or aws.ecs.cluster.name resource attribute in the metrics data and replaces it with the actual cluster name.
   - The {NodeName} placeholder searches for the NodeName or k8s.node.name resource attribute.
   - The {TaskId} placeholder searches for the TaskId or aws.ecs.task.id resource attribute.
   If no matching resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value.
2. You can use the log_stream_name parameter to customize the log stream name, set the default otel-stream value, or use the following placeholders:
   - The {ClusterName} placeholder searches for the ClusterName or aws.ecs.cluster.name resource attribute.
   - The {ContainerInstanceId} placeholder searches for the ContainerInstanceId or aws.ecs.container.instance.id resource attribute. This resource attribute is valid only for the AWS ECS EC2 launch type.
   - The {NodeName} placeholder searches for the NodeName or k8s.node.name resource attribute.
   - The {TaskDefinitionFamily} placeholder searches for the TaskDefinitionFamily or aws.ecs.task.family resource attribute.
   - The {TaskId} placeholder searches for the TaskId or aws.ecs.task.id resource attribute in the metrics data and replaces it with the actual task ID.
   If no matching resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value.
3. Optional. Converts resource attributes to telemetry attributes such as metric labels. Disabled by default.
4. The AWS region of the log stream. If a region is not already set in the default credential provider chain, you must specify the region.
5. Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference).
6. Optional. Sets the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0, the logs never expire. Supported values for retention in days are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 2192, 2557, 2922, 3288, or 3653.
7. Optional. A custom namespace for the Amazon CloudWatch metrics.
8. Optional. The AWS Identity and Access Management (IAM) role for uploading the metric segments to a different account.
4.10. AWS X-Ray Exporter
The AWS X-Ray Exporter converts OpenTelemetry spans to AWS X-Ray Segment Documents and then sends them directly to the AWS X-Ray service. The AWS X-Ray Exporter uses the PutTraceSegments API and signs requests by using the AWS SDK for Go and the default credential provider chain.
The AWS X-Ray Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled AWS X-Ray Exporter
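A sketch of the exporters fragment, with angle-bracket placeholders standing in for environment-specific values; the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  awsxray:
    region: "eu-west-1" # 1
    endpoint: <service_endpoint> # 2
    resource_arn: "<aws_resource_arn>" # 3
    role_arn: "<role_arn>" # 4
    indexed_attributes: ["<attribute_1>", "<attribute_2>"] # 5
    aws_log_groups: ["<group_1>", "<group_2>"] # 6
    request_timeout_seconds: 120 # 7
```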
1. The destination region for the X-Ray segments sent to the AWS X-Ray service. For example, eu-west-1.
2. Optional. You can override the default AWS X-Ray service endpoint to which the requests are forwarded. You must include the protocol, such as https://, as part of the endpoint value. For the list of service endpoints by region, see the AWS General Reference.
3. The Amazon Resource Name (ARN) of the AWS resource that is running the Collector.
4. The AWS Identity and Access Management (IAM) role for uploading the X-Ray segments to a different account.
5. The list of attribute names to be converted to X-Ray annotations.
6. The list of log group names for Amazon CloudWatch Logs.
7. The time duration in seconds before timing out a request. If omitted, the default value is 30.
4.11. File Exporter
The File Exporter writes telemetry data to files in persistent storage and supports file operations such as rotation, compression, and writing to multiple files. With this exporter, you can also use a resource attribute to control file naming. The only required setting is path, which specifies the destination path for telemetry files in the persistent-volume file system.
The File Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled File Exporter
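A sketch of the exporters fragment, assuming an illustrative destination path and rotation values; the numbered comments correspond to the callouts that follow.

```yaml
exporters:
  file:
    path: /data/metrics.json # 1
    rotation: # 2
      max_megabytes: 10 # 3
      max_days: 3 # 4
      max_backups: 3 # 5
      localtime: true # 6
    format: proto # 7
    compression: zstd # 8
    flush_interval: 5 # 9
```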
1. The file-system path where the data is to be written. There is no default.
2. File rotation is an optional feature of this exporter. By default, telemetry data is exported to a single file. Add the rotation setting to enable file rotation.
3. The max_megabytes setting is the maximum size, in megabytes, that a file is allowed to reach before it is rotated. The default is 100.
4. The max_days setting is the number of days a file is retained, counting from the timestamp in the file name. There is no default.
5. The max_backups setting is the number of older files to retain. The default is 100.
6. The localtime setting specifies the local-time format for the timestamp that is appended to the file name, in front of any extension, when the file is rotated. The default is Coordinated Universal Time (UTC).
7. The format for encoding the telemetry data before writing it to a file. The default format is json. The proto format is also supported.
8. File compression is optional and not set by default. This setting defines the compression algorithm for the data that is exported to a file. Currently, only the zstd compression algorithm is supported. There is no default.
9. The time interval between flushes. A value without a unit is interpreted as nanoseconds. This setting is ignored when file rotation is enabled through the rotation settings.
4.12. Google Cloud Exporter
The Google Cloud Exporter sends telemetry data to Google Cloud Operations Suite. Using the Google Cloud Exporter, you can export metrics to Google Cloud Monitoring, logs to Google Cloud Logging, and traces to Google Cloud Trace.
The Google Cloud Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenTelemetry Collector custom resource with the enabled Google Cloud Exporter
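A minimal sketch of such a custom resource; the instance name, secret name, and project identifier are placeholders, and the numbered comments correspond to the callouts that follow.

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  volumes:
  - name: gcp-key
    secret:
      secretName: <gcp_service_account_key_secret>
  volumeMounts:
  - name: gcp-key
    mountPath: /keys
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS # 1
    value: /keys/key.json
  config:
    exporters:
      googlecloud:
        project: <project_id> # 2
    service:
      pipelines:
        traces:
          exporters: [googlecloud]
```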
1. The GOOGLE_APPLICATION_CREDENTIALS environment variable points to the authentication key.json file. The key.json file is mounted as a secret volume to the OpenTelemetry Collector.
2. Optional. The project identifier. If not specified, the project is automatically determined from the credentials.

By default, the exporter sends telemetry data to the project specified in the project field of the exporter's configuration. You can set up an override on a per-metric basis by using the gcp.project.id resource attribute. For example, if a metric has a project label, you can use the Group-by-Attributes Processor to promote it to a resource label, and then use the Resource Processor to rename the attribute from project to gcp.project.id.