Chapter 10. Distributed tracing
This chapter outlines the support for distributed tracing in AMQ Streams, using Jaeger.
How you configure distributed tracing varies by AMQ Streams client and component.
- You instrument Kafka Producer, Consumer, and Streams API applications for distributed tracing using an OpenTracing client library. This involves adding instrumentation code to these clients, which monitors the execution of individual transactions in order to generate trace data.
- Distributed tracing support is built into the Kafka Connect and Mirror Maker components of AMQ Streams. To configure these components for distributed tracing, you configure and update the relevant custom resources.
Before configuring distributed tracing in AMQ Streams clients and components, you must first initialize and configure a Jaeger tracer in the Kafka cluster, as described in Initializing a Jaeger tracer for Kafka clients.
Distributed tracing is not supported for Kafka brokers.
10.1. Overview of distributed tracing in AMQ Streams
Distributed tracing allows developers and system administrators to track the progress of transactions between applications (and services in a microservice architecture) in a distributed system. This information is useful for monitoring application performance and investigating issues with target systems and end-user applications.
In AMQ Streams and data streaming platforms in general, distributed tracing facilitates the end-to-end tracking of messages: from source systems to the Kafka cluster and then to target systems and applications.
As an aspect of system observability, distributed tracing complements the metrics that are available to view in Grafana dashboards and the available loggers for each component.
OpenTracing overview
Distributed tracing in AMQ Streams is implemented using the open source OpenTracing and Jaeger projects.
The OpenTracing specification defines APIs that developers can use to instrument applications for distributed tracing. It is independent of the tracing system used.
When instrumented, applications generate traces for individual transactions. Traces are composed of spans, which define specific units of work.
To simplify the instrumentation of Kafka Producer, Consumer, and Streams API applications, AMQ Streams includes the OpenTracing Apache Kafka Client Instrumentation library.
The OpenTracing project is merging with the OpenCensus project. The new, combined project is named OpenTelemetry. OpenTelemetry will provide compatibility for applications that are instrumented using the OpenTracing APIs.
Jaeger overview
Jaeger, a tracing system, is an implementation of the OpenTracing APIs used for monitoring and troubleshooting microservices-based distributed systems. It consists of four main components and provides client libraries for instrumenting applications. You can use the Jaeger user interface to visualize, query, filter, and analyze trace data.
An example of a query in the Jaeger user interface
10.1.1. Distributed tracing support in AMQ Streams
In AMQ Streams, distributed tracing is supported in:
- Kafka Connect (including Kafka Connect with Source2Image support)
- Mirror Maker
You enable and configure distributed tracing for these components by setting template configuration properties in the relevant custom resource (for example, KafkaConnect).
To enable distributed tracing in Kafka Producer, Consumer, and Streams API applications, you can instrument application code using the OpenTracing Apache Kafka Client Instrumentation library. When instrumented, these clients generate traces for messages (for example, when producing messages or writing offsets to the log).
Traces are sampled according to a sampling strategy and then visualized in the Jaeger user interface. This trace data is useful for monitoring the performance of your Kafka cluster and debugging issues with target systems and applications.
Outline of procedures
To set up distributed tracing for AMQ Streams, follow these procedures in order:
- Initialize a Jaeger tracer for Kafka clients.
- Instrument Kafka Producer, Consumer, and Streams API applications with tracers.
- Set up tracing for Mirror Maker and Kafka Connect.
This chapter covers setting up distributed tracing for AMQ Streams clients and components only. Setting up distributed tracing for applications and systems beyond AMQ Streams is outside the scope of this chapter. To learn more about this subject, see the OpenTracing documentation and search for "inject and extract".
Before you start
Before you set up distributed tracing for AMQ Streams, it is helpful to understand:
- The basics of OpenTracing, including key concepts such as traces, spans, and tracers. Refer to the OpenTracing documentation.
- The components of the Jaeger architecture.
Prerequisites
- The Jaeger backend components are deployed to your OpenShift cluster. For deployment instructions, see the Jaeger deployment documentation.
10.2. Setting up tracing for Kafka clients
This section describes how to initialize a Jaeger tracer to allow you to instrument your client applications for distributed tracing.
10.2.1. Initializing a Jaeger tracer for Kafka clients
Configure and initialize a Jaeger tracer using a set of tracing environment variables.
Procedure
Perform the following steps for each client application.
1. Add Maven dependencies for Jaeger to the pom.xml file for the client application:

<dependency>
    <groupId>io.jaegertracing</groupId>
    <artifactId>jaeger-client</artifactId>
    <version>1.0.0.redhat-00002</version>
</dependency>

2. Define the configuration of the Jaeger tracer using the tracing environment variables.
3. Create the Jaeger tracer from the environment variables that you defined in step two:
Tracer tracer = Configuration.fromEnv().getTracer();
Note: For alternative ways to initialize a Jaeger tracer, see the Java OpenTracing library documentation.
4. Register the Jaeger tracer as a global tracer:
GlobalTracer.register(tracer);
A Jaeger tracer is now initialized for the client application to use.
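For reference, the steps above can be combined into a single helper. The following is a minimal sketch, assuming the jaeger-client dependency shown in step 1 is on the classpath; the class and method names are illustrative only:

import io.jaegertracing.Configuration;
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class TracingInitializer {

    // Builds a tracer from the JAEGER_* environment variables and registers it globally.
    // JAEGER_SERVICE_NAME must be set in the environment for this to succeed.
    public static void initTracing() {
        Tracer tracer = Configuration.fromEnv().getTracer();
        GlobalTracer.register(tracer);
    }
}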
10.2.2. Tracing environment variables
Use these environment variables when configuring a Jaeger tracer for Kafka clients.
The tracing environment variables are part of the Jaeger project and are subject to change. For the latest environment variables, see the Jaeger documentation.
Property | Required | Description
---|---|---
JAEGER_SERVICE_NAME | Yes | The name of the Jaeger tracer service.
JAEGER_AGENT_HOST | No | The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP).
JAEGER_AGENT_PORT | No | The port used for communicating with the jaeger-agent through UDP.
JAEGER_ENDPOINT | No | The traces endpoint. Only define this variable if the client application will bypass the jaeger-agent and connect directly to the jaeger-collector.
JAEGER_AUTH_TOKEN | No | The authentication token to send to the endpoint as a bearer token.
JAEGER_USER | No | The username to send to the endpoint if using basic authentication.
JAEGER_PASSWORD | No | The password to send to the endpoint if using basic authentication.
JAEGER_PROPAGATION | No | A comma-separated list of formats to use for propagating the trace context. Defaults to the standard Jaeger format. Valid values are jaeger and b3.
JAEGER_REPORTER_LOG_SPANS | No | Indicates whether the reporter should also log the spans.
JAEGER_REPORTER_MAX_QUEUE_SIZE | No | The reporter’s maximum queue size.
JAEGER_REPORTER_FLUSH_INTERVAL | No | The reporter’s flush interval, in milliseconds. Defines how frequently the Jaeger reporter flushes span batches.
JAEGER_SAMPLER_TYPE | No | The sampling strategy to use for client traces: Constant, Probabilistic, Rate Limiting, or Remote (the default). To sample all traces, use the Constant sampling strategy with a parameter of 1. For more information, see the Jaeger documentation.
JAEGER_SAMPLER_PARAM | No | The sampler parameter (number).
JAEGER_SAMPLER_MANAGER_HOST_PORT | No | The hostname and port to use if a Remote sampling strategy is selected.
JAEGER_TAGS | No | A comma-separated list of tracer-level tags that are added to all reported spans. The value can also refer to an environment variable using the format ${envVarName:default}.
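As an alternative to environment variables, the sampler and reporter settings in this table can also be supplied programmatically through the jaeger-client Configuration builder. The following is a minimal sketch; the service name is an illustrative assumption, and the sampler uses the Constant strategy with a parameter of 1 so that all traces are sampled:

// Equivalent to JAEGER_SAMPLER_TYPE=const and JAEGER_SAMPLER_PARAM=1:
Configuration.SamplerConfiguration samplerConfig = Configuration.SamplerConfiguration.fromEnv()
    .withType("const")
    .withParam(1);

// Equivalent to JAEGER_REPORTER_LOG_SPANS=true:
Configuration.ReporterConfiguration reporterConfig = Configuration.ReporterConfiguration.fromEnv()
    .withLogSpans(true);

// Equivalent to JAEGER_SERVICE_NAME=my-jaeger-service:
Tracer tracer = new Configuration("my-jaeger-service")
    .withSampler(samplerConfig)
    .withReporter(reporterConfig)
    .getTracer();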
Additional resources
10.3. Instrumenting Kafka clients with tracers
This section describes how to instrument Kafka Producer, Consumer, and Streams API applications for distributed tracing.
10.3.1. Instrumenting Kafka Producers and Consumers for tracing
Use a Decorator pattern or Interceptors to instrument your Java Producer and Consumer application code for distributed tracing.
Procedure
Perform these steps in the application code of each Kafka Producer and Consumer application.
Add the Maven dependency for OpenTracing to the Producer or Consumer’s pom.xml file:

<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-kafka-client</artifactId>
    <version>0.1.4.redhat-00002</version>
</dependency>
Instrument your client application code using either a Decorator pattern or Interceptors.
If you prefer to use a Decorator pattern, use the following example:
// Create an instance of the KafkaProducer:
KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);

// Create an instance of the TracingKafkaProducer:
TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer);

// Send:
tracingProducer.send(...);

// Create an instance of the KafkaConsumer:
KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps);

// Create an instance of the TracingKafkaConsumer:
TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer);

// Subscribe:
tracingConsumer.subscribe(Collections.singletonList("messages"));

// Get messages:
ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000);

// Retrieve SpanContext from polled record (consumer side):
ConsumerRecord<Integer, String> record = ...
SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);
If you prefer to use Interceptors, use the following example:
// Register the tracer with GlobalTracer:
GlobalTracer.register(tracer);

// Add the TracingProducerInterceptor to the sender properties:
senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName());

// Create an instance of the KafkaProducer:
KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);

// Send:
producer.send(...);

// Add the TracingConsumerInterceptor to the consumer properties:
consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());

// Create an instance of the KafkaConsumer:
KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps);

// Subscribe:
consumer.subscribe(Collections.singletonList("messages"));

// Get messages:
ConsumerRecords<Integer, String> records = consumer.poll(1000);

// Retrieve the SpanContext from a polled message (consumer side):
ConsumerRecord<Integer, String> record = ...
SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);
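Both examples end by extracting the SpanContext from a polled record. Although not shown above, a common next step is to continue the trace on the consumer side by starting a child span from that context. The following is a minimal sketch using the standard OpenTracing API; the operation name is an illustrative assumption:

// Start a new span as a child of the span that produced the record:
Span span = tracer.buildSpan("process-message")
    .asChildOf(spanContext)
    .start();
try {
    // ... process the record ...
} finally {
    // Finish the span so it is reported to Jaeger:
    span.finish();
}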
10.3.1.1. Custom span names in a Decorator pattern
A span is a logical unit of work in Jaeger, with an operation name, start time, and duration.
If you use a Decorator pattern to instrument your Kafka Producer and Consumer applications, you can define custom span names by passing a BiFunction object as an additional argument when creating the TracingKafkaProducer and TracingKafkaConsumer objects. The OpenTracing Apache Kafka Client Instrumentation library includes several built-in span names, which are described below.
Example: Using custom span names to instrument client application code in a Decorator pattern
// Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord producerRecord) and returns a String to be used as the name:
BiFunction<String, ProducerRecord, String> producerSpanNameProvider =
    (operationName, producerRecord) -> "CUSTOM_PRODUCER_NAME";

// Create an instance of the KafkaProducer:
KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);

// Create an instance of the TracingKafkaProducer, passing in the producerSpanNameProvider BiFunction:
TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider);

// Spans created by the tracingProducer will now have "CUSTOM_PRODUCER_NAME" as the span name.

// Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name:
BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider =
    (operationName, consumerRecord) -> operationName.toUpperCase();

// Create an instance of the KafkaConsumer:
KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps);

// Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction:
TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider);

// Spans created by the tracingConsumer will have the operation name as the span name, in upper-case.
// "receive" -> "RECEIVE"
10.3.1.2. Built-in span names
When defining custom span names, you can use the following BiFunctions in the ClientSpanNameProvider class. If no spanNameProvider is specified, CONSUMER_OPERATION_NAME and PRODUCER_OPERATION_NAME are used.
BiFunction | Description
---|---
CONSUMER_OPERATION_NAME, PRODUCER_OPERATION_NAME | Returns the operationName as the span name: "receive" for consumers and "send" for producers.
CONSUMER_PREFIXED_OPERATION_NAME(String prefix), PRODUCER_PREFIXED_OPERATION_NAME(String prefix) | Returns a String concatenation of prefix and operationName.
CONSUMER_TOPIC, PRODUCER_TOPIC | Returns the name of the topic that the message was sent to or retrieved from in the format record.topic().
CONSUMER_PREFIXED_TOPIC(String prefix), PRODUCER_PREFIXED_TOPIC(String prefix) | Returns a String concatenation of prefix and the topic name in the format record.topic().
CONSUMER_OPERATION_NAME_TOPIC, PRODUCER_OPERATION_NAME_TOPIC | Returns the operation name and the topic name: "operationName - record.topic()".
CONSUMER_PREFIXED_OPERATION_NAME_TOPIC(String prefix), PRODUCER_PREFIXED_OPERATION_NAME_TOPIC(String prefix) | Returns a String concatenation of prefix and "operationName - record.topic()".
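For example, to name consumer spans after the topic that each record was retrieved from, pass one of the built-in providers instead of a custom BiFunction. The following is a minimal sketch, assuming the consumer and tracer objects from the earlier Decorator pattern example:

// Use the built-in CONSUMER_TOPIC provider so that each span is named after the record’s topic:
TracingKafkaConsumer<Integer, String> tracingConsumer =
    new TracingKafkaConsumer<>(consumer, tracer, ClientSpanNameProvider.CONSUMER_TOPIC);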
10.3.2. Instrumenting Kafka Streams applications for tracing
This section describes how to instrument Kafka Streams API applications for distributed tracing.
Procedure
Perform the following steps for each Kafka Streams API application.
Add the opentracing-kafka-streams dependency to the pom.xml file for your Kafka Streams API application:

<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-kafka-streams</artifactId>
    <version>0.1.4.redhat-00002</version>
</dependency>
Create an instance of the TracingKafkaClientSupplier supplier interface:

KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);
Provide the supplier interface to KafkaStreams:

KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier);
streams.start();
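Putting the steps together, a minimal sketch of a traced Kafka Streams application might look like the following. The application ID, bootstrap address, and topic names are illustrative assumptions, and tracer is the Jaeger tracer initialized earlier:

// Basic Streams configuration (values are examples only):
Properties config = new Properties();
config.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-traced-streams-app");
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");

// A simple topology that copies records from one topic to another:
StreamsBuilder builder = new StreamsBuilder();
builder.stream("source-topic").to("target-topic");

// Wrap the Kafka clients used by Kafka Streams with tracing clients:
KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);

// Build and start the traced Streams application:
KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier);
streams.start();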
10.4. Setting up tracing for Mirror Maker and Kafka Connect
Distributed tracing is supported for Mirror Maker and Kafka Connect (including Kafka Connect with Source2Image support).
For Mirror Maker, messages are traced from the source cluster to the target cluster; the trace data records messages entering and leaving the Mirror Maker component.
Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. For more information, see Section 3.2, “Kafka Connect cluster configuration”.
10.4.1. Enabling tracing in Mirror Maker and Kafka Connect resources
Update the configuration of KafkaMirrorMaker, KafkaConnect, and KafkaConnectS2I custom resources to specify and configure a Jaeger tracer service for each resource. Updating a tracing-enabled resource in your OpenShift cluster triggers two events:
- Interceptor classes are updated in the integrated consumers and producers in Mirror Maker or Kafka Connect.
- The tracing agent initializes a Jaeger tracer based on the tracing configuration defined in the resource.
Procedure
Perform these steps for each KafkaMirrorMaker, KafkaConnect, and KafkaConnectS2I resource.
In the spec.template property, configure the Jaeger tracer service. For example:

Jaeger tracer configuration for Kafka Connect

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  template:
    connectContainer: 1
      env:
        - name: JAEGER_SERVICE_NAME
          value: my-jaeger-service
        - name: JAEGER_AGENT_HOST
          value: jaeger-agent-name
        - name: JAEGER_AGENT_PORT
          value: "6831"
  tracing: 2
    type: jaeger
  #...
Jaeger tracer configuration for Mirror Maker

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  #...
  template:
    mirrorMakerContainer:
      env:
        - name: JAEGER_SERVICE_NAME
          value: my-jaeger-service
        - name: JAEGER_AGENT_HOST
          value: jaeger-agent-name
        - name: JAEGER_AGENT_PORT
          value: "6831"
  tracing:
    type: jaeger
  #...
1. Use the tracing environment variables as template configuration properties.
2. Set the spec.tracing.type property to jaeger.
Create or update the resource:
oc apply -f your-file